Biblio
We present a unified framework for studying secure multiparty computation (MPC) with arbitrarily restricted interaction patterns such as a chain, a star, a directed tree, or a directed graph. Our study generalizes both standard MPC and recent models for MPC with specific restricted interaction patterns, such as those studied by Halevi et al. (Crypto 2011), Goldwasser et al. (Eurocrypt 2014), and Beimel et al. (Crypto 2014). Since restricted interaction patterns cannot always yield full security for MPC, we start by formalizing the notion of "best possible security" for any interaction pattern. We then obtain the following main results. Completeness theorem: we prove that the star interaction pattern is complete for the problem of MPC with general interaction patterns. Positive results: we present both information-theoretic and computationally secure protocols for computing arbitrary functions with general interaction patterns, as well as more efficient protocols for computing symmetric functions in both the computational and the information-theoretic setting. Our computationally secure protocols for general functions necessarily rely on indistinguishability obfuscation, while the ones for computing symmetric functions make simple use of multilinear maps. Negative results: we show that, in many cases, the complexity of our information-theoretic protocols is essentially the best that can be achieved. All of our protocols rely on a correlated randomness setup, which is necessary in our setting (for computing general functions). In the computational case, we also present a generic procedure to make any correlated randomness setup reusable, in the common random string model. Although most of our information-theoretic protocols have exponential complexity, they may be practical for functions on small domains (e.g., {0,1}^20), where they are concretely faster than their computational counterparts.
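As a flavor of what a correlated-randomness setup buys, here is a minimal two-party sketch of the classic one-time-truth-table idea (our illustration, not the paper's protocol): a dealer distributes random input shifts and additive shares of a shifted truth table, and each party later announces only its masked input.

```python
import secrets

N = 2 ** 4  # toy input domain {0..15} per party

def deal(f):
    """Dealer phase: random shifts plus XOR shares of the shifted table."""
    r1, r2 = secrets.randbelow(N), secrets.randbelow(N)
    # Shifted truth table: T[u][v] = f(u - r1, v - r2) (mod N), Boolean output.
    table = [[f((u - r1) % N, (v - r2) % N) for v in range(N)] for u in range(N)]
    share0 = [[secrets.randbelow(2) for _ in range(N)] for _ in range(N)]
    share1 = [[table[u][v] ^ share0[u][v] for v in range(N)] for u in range(N)]
    return r1, r2, share0, share1

def run(f, x1, x2):
    r1, r2, s0, s1 = deal(f)
    u, v = (x1 + r1) % N, (x2 + r2) % N  # the only values ever announced
    return s0[u][v] ^ s1[u][v]           # XOR of output shares = f(x1, x2)

assert run(lambda a, b: int(a < b), 3, 9) == 1
```

Each announced value is uniformly random on its own, so the transcript reveals nothing beyond the output; the price is a table exponential in the input length, which matches the abstract's remark that such protocols remain practical only on small domains.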
A digital microfluidic biochip (DMFB) is an emerging technology that enables miniaturized analysis systems for point-of-care clinical diagnostics, DNA sequencing, and environmental monitoring. A DMFB reduces the rate of sample and reagent consumption and automates the analysis of assays. In this paper, we provide the first assessment of the security vulnerabilities of DMFBs. We identify result-manipulation attacks on a DMFB that maliciously alter assay outcomes. Two practical result-manipulation attacks are shown on a DMFB platform performing an enzymatic glucose assay on serum. In the first attack, the attacker adjusts the concentration of the glucose sample and thereby modifies the final result. In the second attack, the attacker tampers with the calibration curve of the assay operation. We then identify denial-of-service attacks, where the attacker can disrupt the assay operation by tampering either with the droplet-routing algorithm or with the actuation sequence. We demonstrate these attacks using a digital microfluidic synthesis simulator. The results show that the attacks are easy to implement and hard to detect. Therefore, this work highlights the need for effective protections against malicious modifications in DMFBs.
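To make the calibration-curve attack concrete, here is a hedged numerical sketch (all constants invented) of how silently scaling the slope of a stored linear calibration fit skews every reported glucose concentration:

```python
# Glucose is read off a linear calibration fit, absorbance = slope * c + intercept,
# so silently scaling the stored slope shifts every reported result.
def concentration(absorbance, slope, intercept):
    return (absorbance - intercept) / slope

measured = 0.52  # raw absorbance from the assay (made-up value)
honest = concentration(measured, slope=0.0045, intercept=0.01)
tampered = concentration(measured, slope=0.0045 * 1.25, intercept=0.01)
print(f"honest: {honest:.1f} mg/dL, tampered: {tampered:.1f} mg/dL")
# honest: 113.3 mg/dL, tampered: 90.7 mg/dL -- a plausible-looking reading
```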
This paper proposes a novel measure of edge significance that accounts for quantity propagation in a graph. Our method uses a pseudo-propagation process obtained by solving a load-balancing problem on the nodes. The significance of an edge is defined as the difference between the propagation in the graph with that edge and the propagation without it. In simulation, we compare the proposed measure with traditional betweenness centrality in order to clarify how our measure differs from a centrality that also considers the propagation process in a graph.
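Reading the definition literally, the measure could be computed as below. This is our own sketch, with a plain load-splitting diffusion standing in for the paper's load-balancing process; an edge is scored by how much the propagated loads change once the edge is removed.

```python
import networkx as nx  # assumed available for graph handling

def propagate(G, steps=100):
    """Crude propagation: each node splits its load evenly among neighbors."""
    load = {v: 1.0 for v in G}
    for _ in range(steps):
        nxt = {v: 0.0 for v in G}
        for v in G:
            nbrs = list(G.neighbors(v))
            if not nbrs:
                nxt[v] += load[v]
                continue
            for u in nbrs:
                nxt[u] += load[v] / len(nbrs)
        load = nxt
    return load

def edge_significance(G, e):
    base = propagate(G)
    H = G.copy()
    H.remove_edge(*e)
    after = propagate(H)
    return sum(abs(base[v] - after[v]) for v in G)

G = nx.karate_club_graph()
print(edge_significance(G, (0, 1)))
```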
Riding on the success of SDN for enterprise and data center networks, researchers have recently shown much interest in applying SDN to critical infrastructures. A key concern, however, is the vulnerability of the SDN controller as a single point of failure. In this paper, we develop a cyber-physical simulation platform that interconnects Mininet (an SDN emulator), hardware SDN switches, and PowerWorld (a high-fidelity, industry-strength power grid simulator). We report initial experiments on how a number of representative controller faults may impact the delay of smart grid communications. We further evaluate how this delay may affect the performance of the underlying physical system, namely automatic generation control (AGC), a fundamental closed-loop control that regulates the grid frequency to a critical nominal value. Our results show that when the fault-induced delay reaches seconds (e.g., more than four seconds in some of our experiments), degradation of the AGC becomes evident. In particular, the AGC is most vulnerable when it is in a transient following, say, step changes in loading, because the significant state fluctuations will exacerbate the effects of using a stale system state in the control.
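As a hedged illustration of the reported effect (a toy model of ours, not the paper's co-simulation), the loop below regulates a frequency deviation using a measurement that is `delay_steps` samples stale; pushing the delay to several seconds visibly degrades the response.

```python
def simulate(delay_steps, steps=200, dt=0.1, gain=0.5):
    """Euler simulation of a damped plant under delayed proportional control."""
    freq_dev = 1.0                          # deviation right after a load step
    history = [freq_dev] * (delay_steps + 1)
    for _ in range(steps):
        observed = history[0]               # stale state seen by the controller
        freq_dev += dt * (-gain * observed - 0.1 * freq_dev)
        history = history[1:] + [freq_dev]
    return abs(freq_dev)

for d in (0, 10, 40):  # 0 s, 1 s, and 4 s of delay at dt = 0.1 s
    print(f"delay {d * 0.1:.0f}s -> residual deviation {simulate(d):.3f}")
```

With the chosen gain, the 4 s delay crosses the classic stability bound for delayed proportional feedback, so the deviation oscillates and grows instead of settling.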
Providing a global understanding of privacy is crucial, because everything is connected. Companies nowadays offer their customers ever more services that also give the companies more access to customer data and daily activity; electricity providers, for example, market smart meters as a new service with great potential to reduce electricity usage by monitoring readings in real time. Although users may benefit from this extra service, constant access to the readings compromises their privacy: because smart meters report actual consumption in real time, anyone with access can identify which devices are drawing energy at a given moment and how much it costs. This kind of information can be exploited by many kinds of actors; its unauthorized use is an invasion of privacy and may lead to far more severe consequences. This paper proposes an algorithmic approach for the comparison and analysis of smart meter readings that considers the time and temperature factors at each second, identifying usage patterns in each house by detecting appliance activity at each second in O(log m) time.
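One way to reach the abstract's O(log m) per-second cost is a binary search over the m known appliance power signatures sorted by draw; the sketch below is our guess at such a scheme, with invented signatures and tolerance.

```python
import bisect

# Hypothetical appliance signatures (watts), kept sorted for binary search.
signatures = sorted([(60.0, "lamp"), (150.0, "fridge"), (1100.0, "kettle"),
                     (2000.0, "heater")])
watts = [s[0] for s in signatures]

def identify(delta, tolerance=25.0):
    """Match one second's change in the meter reading to an appliance."""
    i = bisect.bisect_left(watts, delta)
    for j in (i - 1, i):  # check the two nearest signatures
        if 0 <= j < len(watts) and abs(watts[j] - delta) <= tolerance:
            return signatures[j][1]
    return None

print(identify(1080.0))  # -> 'kettle'
```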
Social engineering is a kind of advanced persistent threat (APT) that gains private and sensitive information through social networks or other types of communication. Attackers can use social engineering to obtain access to social network accounts and stay there undetected for a long period of time. The purpose of the attack is to steal sensitive data and spread false information rather than to cause direct damage. Targets can include Facebook accounts of government agencies, corporations, schools, or high-profile users. We propose to use an Intrusion Detection System (IDS) to battle such attacks. Social engineering aims to gain easy access so that the attacks can be repeated and ongoing. The focus of this study is to find out how this type of attack is carried out so that it can be properly detected by an IDS in future research.
The number of wearable and smart devices connecting every day to the Internet of Things (IoT) is continuously growing. This gives us a great opportunity to improve quality of life (QoL) standards by adding medical value to these devices. In particular, by exploiting IoT technology, we have the potential to create useful tools whose sensors provide biometric data. This novel study aims to use a smartwatch, independent of other hardware, to predict the blood pressure (BP) drop caused by postural changes. When the drop is due to orthostatic hypotension (OH), it can cause dizziness or even fainting, which increases the risk of falls in the elderly as well as in younger groups of people. A mathematical prediction model is proposed here which can reduce the risk of falls due to OH by sensing heart rate variability (HRV) data and drops in systolic BP after standing in a healthy group of 10 subjects. The experimental results justify the efficiency of the model, as it performs correct prediction in 86.7% of cases, and are encouraging enough to extend the proposed approach to pathological cases, such as patients with Parkinson's disease, with large-scale experiments.
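The paper's model is not spelled out in the abstract; as a purely illustrative stand-in, a threshold on short-term heart rate variability (RMSSD of recent inter-beat intervals, with an invented threshold) could flag a likely systolic drop after standing:

```python
import math

def rmssd(rr_ms):
    """Root mean square of successive differences of inter-beat intervals."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def predict_oh_drop(rr_ms, threshold=20.0):
    # Hypothetical rule: low short-term variability after a stand-up event
    # is taken as a warning sign of an impending BP drop.
    return rmssd(rr_ms) < threshold

print(predict_oh_drop([810, 802, 815, 808, 812, 805]))  # low RMSSD -> True
```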
Integrity is a crucial property in current computing systems. Due to natural or human-made (malicious and non-malicious) faults, this property can be violated. Therefore, many methodologies and patterns that check or verify the integrity of systems or data have been introduced. However, integrity as a property cannot be identified directly. Existing methodologies tackle this problem by identifying other, computable, properties of the system and using a policy that describes how these properties reflect the integrity of the overall system. It is thus a critical task to select the right properties that reflect the integrity of a system in such a way that given integrity requirements are met. To ease this process, we introduce two new patterns, Static Integrity Properties and Dynamic Integrity Properties, to classify the properties. Static Integrity Properties are used to ensure the integrity of a component prior to its use (e.g., the integrity of an executable binary), while Dynamic Integrity Properties are used to ensure the integrity of a component during run time (e.g., properties that reflect the component's behavior or state transitions). Based on an exemplary embedded control system, we show typical use cases to help the system or software architect choose the right class of integrity properties for the targeted system.
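The two pattern classes map naturally onto code. A minimal sketch (ours, with invented states and payload) checks a static property before a component is used and a dynamic property while it runs:

```python
import hashlib

# Static property: verify a component's image against a known-good digest
# before use. (The digest here is computed from the example payload itself.)
def static_integrity_ok(image: bytes, known_good_hex: str) -> bool:
    return hashlib.sha256(image).hexdigest() == known_good_hex

firmware = b"example firmware image"
good = hashlib.sha256(firmware).hexdigest()
assert static_integrity_ok(firmware, good)

# Dynamic property: at run time, accept only allowed state transitions.
ALLOWED = {("idle", "arming"), ("arming", "active"), ("active", "idle")}

def dynamic_integrity_ok(prev_state: str, next_state: str) -> bool:
    return (prev_state, next_state) in ALLOWED

assert dynamic_integrity_ok("idle", "arming")
assert not dynamic_integrity_ok("idle", "active")  # skipped a step -> flag
```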
The application of lightweight block ciphers in the TLS protocol is studied in this paper. More precisely, since the use of lightweight cryptographic algorithms is a prerequisite for addressing security in highly constrained environments such as the Internet of Things, we focus on the behavior of TLS performance when AES is replaced by a lightweight block cipher; to this end, the recently proposed Speck cipher is used as a case study. Experimental results show that a significant gain in performance can be achieved in such constrained environments, and that in some cases Speck with a larger key size than AES may even result in higher throughput.
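For context on why Speck suits constrained targets, its round function is just a modular add, two rotates, and two XORs per round. A minimal sketch of one round, using Speck64's 32-bit words and the standard rotation amounts 8 and 3:

```python
MASK = 0xFFFFFFFF  # 32-bit words (Speck64 family)

def ror(x, r):
    return ((x >> r) | (x << (32 - r))) & MASK

def rol(x, r):
    return ((x << r) | (x >> (32 - r))) & MASK

def speck_round(x, y, k):
    """One Speck round: ARX only, which keeps the hardware footprint tiny."""
    x = (ror(x, 8) + y) & MASK
    x ^= k
    y = rol(y, 3) ^ x
    return x, y

print(speck_round(0x12345678, 0x9ABCDEF0, 0x0F0F0F0F))
```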
Lightweight block ciphers, which are required for IoT devices, have attracted attention. Simeck, one of the most popular lightweight block ciphers, can be implemented on IoT devices in the smallest area. Regarding hardware security, the threat of electromagnetic analysis has been reported; however, no electromagnetic analysis of Simeck has been published. This study therefore proposes a dedicated electromagnetic analysis for the lightweight block cipher Simeck, so as to ensure the safety of IoT devices in the future. To our knowledge, this is the first electromagnetic analysis of Simeck. Experiments using an FPGA prove the validity of the proposed method.
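Side-channel analyses of this kind typically follow the standard correlation-based recipe. The sketch below is generic CPA on a toy leakage model, not the paper's dedicated method: it correlates measured traces against a Hamming-weight hypothesis for every key guess and keeps the best match.

```python
import numpy as np

def hamming_weight(v: int) -> int:
    return bin(v).count("1")

def cpa_best_guess(traces, plaintexts, model, key_space=16):
    """traces: (N, S) EM measurements; model(pt, guess) -> intermediate value."""
    t = traces - traces.mean(axis=0)
    best_guess, best_corr = 0, 0.0
    for guess in range(key_space):
        hyp = np.array([hamming_weight(model(pt, guess)) for pt in plaintexts],
                       dtype=float)
        hyp -= hyp.mean()
        num = hyp @ t                                   # per-sample covariance
        den = np.sqrt((hyp @ hyp) * (t ** 2).sum(axis=0)) + 1e-12
        corr = float(np.max(np.abs(num / den)))
        if corr > best_corr:
            best_guess, best_corr = guess, corr
    return best_guess, best_corr

# Synthetic demo: traces leak HW(pt ^ key) at sample 3 plus Gaussian noise.
rng = np.random.default_rng(0)
key, N, S = 11, 500, 8
pts = rng.integers(0, 16, N)
traces = rng.normal(0, 0.5, (N, S))
traces[:, 3] += [hamming_weight(int(p) ^ key) for p in pts]
print(cpa_best_guess(traces, pts, lambda pt, g: int(pt) ^ g))  # -> (11, ...)
```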
UnlimitID is a method for enhancing the privacy of commodity OAuth and applications based on it, such as OpenID Connect, using anonymous attribute-based credentials built from algebraic Message Authentication Codes (aMACs). OAuth is one of the most widely used protocols on the Web, but it exposes to the identity provider (IdP) each request a relying party (RP) makes for a user's data. Our approach allows for the creation of multiple persistent and unlinkable pseudo-identities and requires no change to the deployed code of relying parties, only to identity providers and the client.
Cloud providers typically implement abstractions for network virtualization on the server, within the operating system that hosts the tenant virtual machines or containers. Despite being flexible and convenient, this approach has fundamental problems: incompatibility with bare-metal support, unnecessary performance overhead, and susceptibility to hypervisor breakouts. To solve these problems, we propose to offload the implementation of network-virtualization abstractions to the top-of-rack switch (ToR). To show that this is feasible and beneficial, we present VNToR, a ToR that takes over the implementation of the security-group abstraction. Our prototype combines commodity switching hardware with a custom software stack and is integrated into OpenStack Neutron. We show that VNToR can store tens of thousands of access rules, adapts to traffic-pattern changes in less than a millisecond, and significantly outperforms the state of the art.
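A simplified model (ours, not VNToR's data plane) of the security-group abstraction being offloaded: a flow is permitted if any rule in the VM's group matches its protocol, remote prefix, and destination port range.

```python
from ipaddress import ip_address, ip_network

# Hypothetical security-group rules: (protocol, remote prefix, port range).
RULES = [("tcp", ip_network("10.0.0.0/24"), (22, 22)),
         ("tcp", ip_network("0.0.0.0/0"), (80, 443))]

def allowed(proto, src_ip, dst_port, rules=RULES):
    """Default-deny: permit only if some rule matches all three fields."""
    return any(proto == p and ip_address(src_ip) in net and lo <= dst_port <= hi
               for p, net, (lo, hi) in rules)

assert allowed("tcp", "10.0.0.7", 22)
assert not allowed("udp", "10.0.0.7", 53)
```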
The Internet has shown itself to be a catalyst for economic growth and social equity, but its potency is thwarted by the fact that it remains off limits to the vast majority of human beings. Mobile phones, the fastest growing technology in the world and one that now reaches around 80% of humanity, can enable universal Internet access if they can resolve the coverage problems that have historically plagued previous cellular architectures (2G, 3G, and 4G). These conventional architectures have not been able to sustain universal service provisioning because they depend on having enough users per cell for economic viability and thus are not well suited to rural areas, which are by definition sparsely populated. The new generation of mobile cellular technology (5G), currently in a formative phase and expected to be finalized around 2020, aims at orders-of-magnitude performance enhancements. 5G offers a clean slate to network designers and can be molded into an architecture amenable to universal Internet provisioning. Keeping in mind the great social benefits of democratizing Internet connectivity, we believe the time is ripe to emphasize universal Internet provisioning as an important goal on the 5G research agenda. In this paper, we investigate the opportunities and challenges in utilizing 5G for global access to the Internet for all (GAIA). We also identify the major technical issues involved in a 5G-based GAIA solution and set up a future research agenda by defining open research problems.
Firewall policies are notorious for misconfiguration errors that can defeat their intended purpose of protecting hosts in the network from malicious users. We believe this is because today's firewall policies are mostly monolithic. Inspired by ideas from modular programming and code refactoring, in this work we introduce three kinds of modules: primary, auxiliary, and template, which facilitate the refactoring of a firewall policy into smaller, reusable, comprehensible, and more manageable components. We present algorithms for generating each of the three modules for a given legacy firewall policy. We also develop ModFP, an automated tool for converting legacy firewall policies represented as access control lists into their modularized format. With the help of ModFP, when examining several real-world policies with sizes ranging from dozens to hundreds of rules, we were able to identify subtle errors.
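As a toy illustration of the refactoring idea, the snippet below hoists rules that share a predicate into named, reusable modules; grouping by source prefix is our simplification and merely stands in for the paper's primary, auxiliary, and template modules.

```python
from collections import defaultdict

# Hypothetical legacy ACL: (action, source prefix, protocol, port).
acl = [("allow", "10.0.1.0/24", "tcp", 80),
       ("allow", "10.0.1.0/24", "tcp", 443),
       ("deny",  "10.0.2.0/24", "tcp", 80)]

def modularize(rules):
    """Group rules sharing a source prefix into one reusable module each."""
    modules = defaultdict(list)
    for action, src, proto, port in rules:
        modules[src].append((action, proto, port))
    return dict(modules)

for name, body in modularize(acl).items():
    print(f"module {name}: {body}")
```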
In this paper we describe the JavaScript Interface for Visualization of User Interaction (JIVUI): a modular, Web-based, and customizable visualization tool that shows an animation of the trace of a user interaction with a graphical interface, or of predictions made by cognitive models of user interaction. Any combination of gaze, mouse, and keyboard data can be reproduced within a user-provided interface. Although customizable, the tool includes a series of plug-ins to support common visualization tasks, including a timeline of input device events and perceptual and cognitive operators based on the Model Human Processor and TYPIST. We talk about our use of this tool to support hypothesis generation, assumption validation, and to guide our modeling efforts.
Cyber assurance of heavy trucks is a major concern with new designs as well as with supporting legacy systems. Many cyber security experts and analysts are used to working with traditional information technology (IT) networks and are familiar with a set of technologies that may not be directly useful in the commercial vehicle sector. To help connect security researchers to heavy trucks, a remotely accessible testbed has been prototyped for experimentation with security methodologies and techniques to evaluate and improve on existing technologies, as well as developing domain-specific technologies. The testbed relies on embedded Linux-based node controllers that can simulate the sensor inputs to various heavy vehicle electronic control units (ECUs). The node controller also monitors and affects the flow of network information between the ECUs and the vehicle communications backbone. For example, a node controller acts as a clone that generates analog wheel speed sensor data while at the same time monitors or controls the network traffic on the J1939 and J1708 networks. The architecture and functions of the node controllers are detailed. Sample interaction with the testbed is illustrated, along with a discussion of the challenges of running remote experiments. Incorporating high fidelity hardware in the testbed enables security researchers to advance the state of the art in hardening heavy vehicle ECUs against cyber-attacks. How the testbed can be used for security research is presented along with an example of its use in evaluating seed/key exchange strength and in intrusion detection systems (IDSs).
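For a flavor of the clone function described, here is a hedged sketch (constants invented) of synthesizing the tone-wheel frequency an analog wheel speed sensor would produce at a target road speed:

```python
# Hypothetical parameters: tire circumference and tone-wheel tooth count.
def wheel_speed_hz(speed_m_s, tire_circumference_m=3.2, teeth=100):
    """Sensor output frequency = wheel revolutions per second * teeth."""
    revs_per_s = speed_m_s / tire_circumference_m
    return revs_per_s * teeth

print(wheel_speed_hz(25.0))  # ~781 Hz at 25 m/s (90 km/h)
```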
This article describes an emerging direction in the intersection between human–computer interaction and cognitive science: the use of cognitive models to give insight into the challenges of cybersecurity (cyber-SA). The article gives a brief overview of work in different areas of cyber-SA where cognitive modeling research plays a role, with regard to direct interaction between end users and computer systems and with regard to the needs of security analysts working behind the scenes. The problem of distinguishing between human users and automated agents (bots) interacting with computer systems is introduced, as well as ongoing efforts toward building Human Subtlety Proofs (HSPs), persistent and unobtrusive windows into human cognition with direct application to cyber-SA. Two computer games are described, proxies to illustrate different ways in which cognitive modeling can potentially contribute to the development of HSPs and similar cyber-SA applications.
Fully automated vehicles will require new functionalities for perception, navigation, and decision making: an Autonomous Driving Intelligence (ADI). We consider architectural cases for such functionalities and investigate how they integrate with legacy platforms. The cases range from a robot that replaces the driver, with entire reuse of existing vehicle platforms, to a clean-slate design. Focusing on Heavy Commercial Vehicles (HCVs), we assess these cases from the perspectives of business, safety, dependability, verification, and realization. The original contributions of this paper are the classification of the architectural cases themselves and the analysis that follows. The analysis reveals that although full reuse of vehicle platforms is appealing, it will require explicitly dealing with the accidental complexity of the legacy platforms, including adding corresponding diagnostics and error handling to the ADI. The current fail-safe design of the platform will also tend to limit availability. Allowing changes to the platforms will enable more optimized designs and fail-operational behaviour, but will require higher initial development cost and a specific emphasis on partitioning and control to limit the influence of safety requirements. For all cases, the design and verification of the ADI will pose a grand challenge and relate to the evolution of the regulatory framework, including safety standards.