Biblio

Filters: Keyword is 2019: April
2019-03-20
Shubham Goyal, Nirav Ajmeri, Munindar P. Singh.  2019.  Applying Norms and Sanctions to Promote Cybersecurity Hygiene. Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS). :1–3.

Many cybersecurity breaches occur because users do not follow security regulations, chief among them regulations pertaining to what might be termed hygiene: applying software patches to operating systems, updating software applications, and maintaining strong passwords.

We capture cybersecurity expectations on users as norms. We empirically investigate the effectiveness of sanctioning mechanisms in promoting compliance with those norms, as well as the detrimental effect of sanctions on the ability of users to complete their work. We do so by developing a game that emulates the decision making of workers in a research lab.

We find that, relative to group sanctions, individual sanctions are more effective in achieving compliance and less detrimental to the ability of users to complete their work.
Our findings have implications for workforce training in cybersecurity.

Extended abstract
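The distinction the abstract draws between individual and group sanctions can be made concrete with a toy model (not the authors' game; the mechanism, agents, and penalty value here are invented for illustration): under individual sanctions only the violator pays a cost, while under group sanctions one violation penalizes every member of the group.

```python
# Illustrative toy model, not the authors' lab-worker game: agents either
# comply with a security norm or not, and sanctions impose a payoff cost.
# "individual" penalizes only violators; "group" penalizes everyone when
# any member violates.

def apply_sanctions(compliance, mode, penalty=1.0):
    """Return the per-agent sanction cost for a list of compliance booleans."""
    if mode == "individual":
        return [0.0 if complied else penalty for complied in compliance]
    if mode == "group":
        # Any single violation penalizes the whole group.
        violated = any(not complied for complied in compliance)
        return [penalty if violated else 0.0 for _ in compliance]
    raise ValueError(f"unknown sanction mode: {mode}")

workers = [True, True, False]  # third worker skipped a required patch
print(apply_sanctions(workers, "individual"))  # [0.0, 0.0, 1.0]
print(apply_sanctions(workers, "group"))       # [1.0, 1.0, 1.0]
```

The sketch shows why group sanctions can be more detrimental to productivity: compliant workers also absorb the cost of a peer's violation.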

2018-07-09
Anirudh Narasimman, Qiaozhi Wang, Fengjun Li, Dongwon Lee, Bo Luo.  2019.  Arcana: Enabling Private Posts on Public Microblog Platforms. 34th International Information Security and Privacy Conference (IFIP SEC).

Many popular online social networks, such as Twitter, Tumblr, and Sina Weibo, adopt privacy models that are too simple to satisfy users' diverse needs for privacy protection. In platforms with no access control (i.e., completely open) or binary access control (i.e., "public" and "friends-only"), users cannot control the dissemination boundary of the content they share. For instance, on Twitter, tweets in "public" accounts are accessible to everyone, including search engines, while tweets in "protected" accounts are visible to all the followers. In this work, we present Arcana to enable fine-grained access control for social network content sharing. In particular, we target the Twitter platform and introduce the "private tweet" function, which allows users to disseminate particular tweets to designated group(s) of followers. Arcana employs Ciphertext-Policy Attribute-based Encryption (CP-ABE) to implement social circle detection and private tweet encryption, so that access-controlled tweets are only readable by designated recipients. To be stealthy, Arcana further embeds the protected content as digital watermarks in image tweets. We have implemented the Arcana prototype as a Chrome browser plug-in, and demonstrated its flexibility and effectiveness. Different from existing approaches that require trusted third parties or an additional server/broker/mediator, Arcana is light-weight and completely transparent to Twitter: all the communications, including key distribution and private tweet dissemination, are exchanged as Twitter messages. Therefore, with small API modifications, Arcana could easily be ported to other online social networking platforms to support fine-grained access control.
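The core idea of CP-ABE, as used by Arcana, is that a ciphertext carries an access policy over attributes, and only keys whose attributes satisfy the policy can decrypt. The sketch below illustrates only the policy-satisfaction logic with invented attribute names; a real system enforces this cryptographically with a CP-ABE library rather than a plaintext check.

```python
# Conceptual illustration of a CP-ABE access policy: a boolean formula over
# attributes that a follower's key must satisfy. This is NOT cryptography,
# just the policy-evaluation step; Arcana enforces the policy via CP-ABE
# encryption so the check cannot be bypassed.

def satisfies(policy, attrs):
    """policy: ("attr", name) | ("and", p, q) | ("or", p, q)"""
    tag = policy[0]
    if tag == "attr":
        return policy[1] in attrs
    if tag == "and":
        return satisfies(policy[1], attrs) and satisfies(policy[2], attrs)
    if tag == "or":
        return satisfies(policy[1], attrs) or satisfies(policy[2], attrs)
    raise ValueError(f"unknown policy node: {tag}")

# A hypothetical "private tweet" readable by close family or any coworker.
policy = ("or",
          ("and", ("attr", "family"), ("attr", "close")),
          ("attr", "coworker"))
print(satisfies(policy, {"family", "close"}))  # True
print(satisfies(policy, {"family"}))           # False
```

In CP-ABE terms, each follower's decryption key would embed their attribute set, and decryption succeeds exactly when this predicate holds.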

2019-04-15
Adam Petz, Perry Alexander.  2019.  A Copland Attestation Manager. Hot Topics in Science of Security (HoTSoS'19).

John Ramsdell, Paul Rowe, Perry Alexander, Sarah Helble, Peter Loscocco, J. Aaron Pendergrass, Adam Petz.  2019.  Orchestrating Layered Attestations. Principles of Security and Trust (POST'19). 11426:197-221.

We present Copland, a language for specifying layered attestations. Layered attestations provide a remote appraiser with structured evidence of the integrity of a target system to support a trust decision. The language is designed to bridge the gap between formal analysis of attestation security guarantees and concrete implementations. We therefore provide two semantic interpretations of terms in our language. The first is a denotational semantics in terms of partially ordered sets of events. This directly connects Copland to prior work on layered attestation. The second is an operational semantics detailing how the data and control flow are executed. This gives explicit implementation guidance for attestation frameworks. We show a formal connection between the two semantics ensuring that any execution according to the operational semantics is consistent with the denotational event semantics. This ensures that formal guarantees resulting from analyzing the event semantics will hold for executions respecting the operational semantics. All results have been formally verified with the Coq proof assistant.
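The denotational semantics described above maps attestation terms to partially ordered sets of events. A minimal sketch of that idea (with invented term constructors and event names; the actual Copland semantics is far richer, with place-indexed measurement, bundling, and signing events): sequential composition orders every event of the left subterm before every event of the right, while parallel composition adds no cross-ordering.

```python
# Toy sketch of "terms denote posets of events". "seq" orders all left-branch
# events before all right-branch events; "par" leaves the branches unordered.
# Constructor and event names are illustrative, not Copland syntax.

def events(term):
    if isinstance(term, str):  # an atomic event, e.g. one measurement
        return [term]
    _, left, right = term
    return events(left) + events(right)

def order(term):
    """Return the set of (before, after) pairs the term imposes."""
    if isinstance(term, str):
        return set()
    op, left, right = term
    pairs = order(left) | order(right)
    if op == "seq":
        pairs |= {(a, b) for a in events(left) for b in events(right)}
    return pairs  # "par" contributes no cross-branch pairs

t = ("seq", "measure_os", ("par", "measure_app", "sign"))
# measure_os precedes both other events; measure_app and sign stay unordered.
print(sorted(order(t)))
```

An operational semantics would then be checked against this: any interleaving an implementation produces must respect every pair in `order(t)`, which is the consistency result the paper verifies in Coq.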

2020-03-09
Farzad Farshchi, Qijing Huang, Heechul Yun.  2019.  Integrating NVIDIA Deep Learning Accelerator (NVDLA) with RISC-V SoC on FireSim. Workshop on Energy Efficient Machine Learning and Cognitive Computing for Embedded Applications.

NVDLA is an open-source deep neural network (DNN) accelerator which has received a lot of attention from the community since its introduction by Nvidia. It is a full-featured hardware IP and can serve as a good reference for conducting research and development of SoCs with integrated accelerators. However, an expensive FPGA board is required to do experiments with this IP in a real SoC. Moreover, since NVDLA is clocked at a lower frequency on an FPGA, it would be hard to do accurate performance analysis with such a setup. To overcome these limitations, we integrate NVDLA into a real RISC-V SoC on the Amazon cloud FPGA using FireSim, a cycle-exact FPGA-accelerated simulator. We then evaluate the performance of NVDLA by running the YOLOv3 object-detection algorithm. Our results show that NVDLA can sustain 7.5 fps when running YOLOv3. We further analyze the performance by showing that sharing the last-level cache with NVDLA can result in up to 1.56x speedup. We then identify that sharing the memory system with the accelerator can result in unpredictable execution time for the real-time tasks running on this platform. We believe this is an important issue that must be addressed in order for on-chip DNN accelerators to be incorporated in real-time embedded systems.
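The figures reported in the abstract translate directly into per-frame latencies. A back-of-envelope computation (assuming, for illustration, that the 1.56x last-level-cache speedup applies on top of the 7.5 fps baseline, which the abstract does not state explicitly):

```python
# Convert the abstract's throughput and speedup numbers into frame latencies.
# 7.5 fps baseline; hypothetically applying the reported 1.56x LLC-sharing
# speedup to that baseline.

fps = 7.5
frame_ms = 1000.0 / fps        # milliseconds per YOLOv3 frame
sped_up_ms = frame_ms / 1.56   # latency if the 1.56x speedup applied

print(f"baseline: {frame_ms:.1f} ms/frame")   # 133.3 ms/frame
print(f"with LLC sharing: {sped_up_ms:.1f} ms/frame")  # 85.5 ms/frame
```

The ~133 ms baseline frame time also makes the paper's real-time concern concrete: memory-system interference that perturbs latencies at this scale can easily break deadlines for co-running real-time tasks.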