Bibliography
Additive Manufacturing (AM) uses Cyber-Physical Systems (CPS) (e.g., 3D printers) that are vulnerable to kinetic cyber-attacks, i.e., attacks that cause physical damage to the system from the cyber domain. In AM, kinetic cyber-attacks are realized by introducing flaws into the design of the 3D objects; these flaws may eventually compromise the structural integrity of the printed objects. In CPS, researchers have designed various attack detection methods to detect attacks on the integrity of the system; in AM, however, attack detection is still in its infancy. Moreover, analog emissions (such as acoustics and electromagnetic emissions) from the side-channels of AM have not been fully considered as a parameter for attack detection. To aid security research in AM, this paper presents a novel attack detection method that detects zero-day kinetic cyber-attacks on AM by identifying the anomalous analog emissions that arise as an outcome of the attack. This is achieved by statistically estimating functions that map the analog emissions to the corresponding cyber-domain data (such as G-code) to model the behavior of the system. Our method has been tested on potential zero-day kinetic cyber-attacks in fused deposition modeling (FDM)-based AM; these attacks physically manifest as changes to parameters of the 3D object such as print speed, dimensions, and movement axis. Accuracy, defined as the capability of our method to detect the range of variations introduced to these parameters as a result of kinetic cyber-attacks, is 77.45%.
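A minimal sketch of the mapping-plus-residual idea described above, assuming made-up G-code features (speed, travel distance, active axis) and a plain least-squares fit in place of the paper's statistical estimation:

    # Hypothetical sketch: regress an acoustic-emission feature on G-code-derived
    # parameters, then flag observations whose residual is improbably large.
    import numpy as np

    def fit_emission_model(gcode_features, emission_rms):
        # least-squares map from (speed, distance, axis) to an emission feature
        X = np.column_stack([gcode_features, np.ones(len(gcode_features))])
        coef, *_ = np.linalg.lstsq(X, emission_rms, rcond=None)
        sigma = (emission_rms - X @ coef).std()
        return coef, sigma

    def is_anomalous(coef, sigma, gcode_row, observed_rms, k=3.0):
        x = np.append(gcode_row, 1.0)
        return abs(observed_rms - x @ coef) > k * sigma

    rng = np.random.default_rng(0)
    feats = rng.uniform(0, 1, size=(200, 3))                  # synthetic G-code features
    rms = feats @ np.array([2.0, 0.5, 1.0]) + 0.1 * rng.normal(size=200)
    coef, sigma = fit_emission_model(feats, rms)
    print(is_anomalous(coef, sigma, feats[0], rms[0]))        # benign print: False
    print(is_anomalous(coef, sigma, feats[0], rms[0] + 1.0))  # tampered print: True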
The security of critical infrastructures such as oil and gas cyber-physical systems is a significant concern in today's world, where malicious activities are more frequent than ever before. On one side are cyber criminals who compromise cyber infrastructure to control physical processes; on the other are physical criminals who attack the physical infrastructure, motivated either to destroy the target or to steal oil from pipelines. Unfortunately, due to limited resources and physical dispersion, it is impossible for the system administrator to protect every target all the time. In this paper, we tackle the problem of cyber and physical attacks on oil pipeline infrastructure by proposing a Stackelberg Security Game with three players: the system administrator as the leader, and the cyber and physical attackers as followers. The novelty of this paper is that we formulate a real-world oil-theft problem using a game-theoretic approach. The game has two different types of targets attacked by two distinct types of adversaries with different motives, who can coordinate to maximize their rewards. The solution to this game assists the system administrator of the oil pipeline cyber-physical system in allocating cyber security controls to the cyber targets and in assigning patrol teams to the pipeline regions efficiently. This paper provides a theoretical framework for formulating and solving the above problem.
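To make the game-theoretic setup concrete, a toy single-attacker Stackelberg security game with pure patrol allocations is sketched below; the paper's three-player game with coordinating cyber and physical attackers is substantially richer, and the target values here are invented:

    # Minimal Stackelberg security game sketch: the defender commits to placing
    # k patrols on n targets, the attacker observes the allocation and hits the
    # target with the highest expected payoff.
    from itertools import combinations

    def attacker_best_response(covered, values, penalty=0.0):
        # attacker payoff is the target value if uncovered, else the penalty
        payoffs = [penalty if t in covered else v for t, v in enumerate(values)]
        target = max(range(len(values)), key=lambda t: payoffs[t])
        return target, payoffs[target]

    def defender_commitment(values, k):
        # pick the allocation that minimizes the attacker's achievable payoff
        best = None
        for covered in combinations(range(len(values)), k):
            _, loss = attacker_best_response(set(covered), values)
            if best is None or loss < best[1]:
                best = (covered, loss)
        return best

    values = [10, 8, 5, 3]                    # hypothetical pipeline-region values
    print(defender_commitment(values, k=2))   # patrols cover targets 0 and 1; loss 5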
Multilateration techniques have been proposed to verify the integrity of unprotected location claims in wireless localization systems. A common assumption is that the adversary is equipped with only a single device from which it transmits location spoofing signals. In this paper, we consider a more advanced model in which the attacker is equipped with multiple devices and performs a geographically distributed, coordinated attack on the multilateration system. The feasibility of a distributed multi-device attack is demonstrated experimentally with a self-developed attack implementation based on multiple COTS software-defined radio (SDR) devices. We launch an attack against the OpenSky Network, an air traffic surveillance system that implements a time-difference-of-arrival (TDoA) multilateration method for aircraft localization based on ADS-B signals. Our experiments show that the timing errors of distributed spoofed signals are indistinguishable from the multilateration errors of legitimate aircraft signals, indicating that the threat of multi-device spoofing attacks is real in this and similar systems. In the second part of this work, we investigate physical-layer features that could be used to detect multi-device attacks. We show that the frequency offset and transient phase noise of the attacker's radio devices can be exploited to discriminate between a received signal transmitted by a single (legitimate) transponder and one transmitted by multiple (malicious) spoofing sources. Based on this, we devise a multi-device spoofing detection system that achieves zero false positives and a false negative rate below 1%.
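An illustrative consistency check in the spirit of TDoA multilateration (not the OpenSky Network implementation): compare the measured time differences of arrival against those expected for the claimed position and flag large residuals:

    # Toy TDoA consistency check for a claimed 2-D position and fixed receivers.
    import math

    C = 299_792_458.0  # speed of light, m/s

    def expected_tdoas(claimed_pos, receivers):
        # TDoA of each receiver relative to receivers[0] for the claimed position
        d = [math.dist(claimed_pos, r) for r in receivers]
        return [(di - d[0]) / C for di in d[1:]]

    def tdoa_residual(claimed_pos, receivers, measured_tdoas):
        exp = expected_tdoas(claimed_pos, receivers)
        return max(abs(m - e) for m, e in zip(measured_tdoas, exp))

    receivers = [(0, 0), (30_000, 0), (0, 30_000), (30_000, 30_000)]  # metres
    claimed = (12_000, 9_000)
    measured = expected_tdoas(claimed, receivers)               # consistent claim
    print(tdoa_residual(claimed, receivers, measured) < 1e-6)   # True
    spoofed = [t + 5e-6 for t in measured]                      # 5 microsecond errors
    print(tdoa_residual(claimed, receivers, spoofed) < 1e-6)    # False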
Securing visible light communication (VLC) systems at the physical layer promises to protect against a variety of attacks. Recent work shows that the adaptation of existing legacy radio-wave physical layer security (PLS) mechanisms is possible with minor changes. Yet many adaptations open new vulnerabilities due to the distinct propagation characteristics of visible light. A common understanding of the threats arising from various attacker capabilities is missing. We specify a new attacker model for visible light physical layer attacks and evaluate the applicability of existing PLS approaches. Our results show that many attacks are not considered in current solutions.
In this work, we constructively combine adaptive wormholes with channel-reciprocity based key establishment (CRKE), which has been proposed as a lightweight security solution for IoT devices and may be even more important for the 5G Tactile Internet and its embedded low-end devices. We present a new secret-key generation protocol in which two parties compute shared cryptographic keys under narrow-band multi-path fading models over a delayed digital channel. The proposed approach furthermore enables distance-bounding the key establishment process via the coherence-time dependencies of the wireless channel. Our scheme is thoroughly evaluated both theoretically and practically. For the latter, we used a testbed based on the IEEE 802.15.4 standard and performed extensive experiments in a real-world manufacturing environment. Additionally, we demonstrate adaptive wormhole attacks (AWOAs) and their consequences for several physical-layer security schemes, and we propose a countermeasure that minimizes the risk of AWOAs.
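A minimal sketch of the channel-reciprocity idea that CRKE builds on, assuming synthetic correlated RSSI traces and a simple median-threshold quantizer rather than the paper's protocol:

    # Both parties observe a shared fading process plus independent noise and
    # quantize it into bits; residual disagreements would be removed by
    # information reconciliation before hashing into a key.
    import random, statistics, hashlib

    def measure_channel(n, seed, noise=0.3):
        rng = random.Random(seed)
        common = random.Random(42)          # the reciprocal channel both sides see
        return [common.gauss(0, 1) + rng.gauss(0, noise) for _ in range(n)]

    def quantize(samples):
        thr = statistics.median(samples)
        return ''.join('1' if s > thr else '0' for s in samples)

    alice = quantize(measure_channel(128, seed=1))
    bob = quantize(measure_channel(128, seed=2))
    mismatch = sum(a != b for a, b in zip(alice, bob)) / len(alice)
    print(f"bit disagreement before reconciliation: {mismatch:.1%}")
    print(hashlib.sha256(alice.encode()).hexdigest()[:16])   # key material after agreement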
Physical layer security for wireless communication is broadly considered a promising approach to protect data confidentiality against eavesdroppers. However, despite its ample theoretical foundation, the transition to practical implementations of physical-layer security has yet to succeed. A close inspection of proven-vulnerable physical-layer security designs reveals that the flaws are usually overlooked when a scheme is evaluated only against an inferior, single-antenna eavesdropper. Meanwhile, the attacks exposing such vulnerabilities often lack theoretical justification. To reduce the gap between theory and practice, we posit that a physical-layer security scheme must be studied under multiple adversarial models to fully grasp its security strength. In this regard, we evaluate a specific physical-layer security scheme, i.e., orthogonal blinding, under multiple eavesdropper settings. We further propose a practical "ciphertext-only attack" that allows eavesdroppers to recover the original message by exploiting the low-entropy fields in wireless packets. By means of simulation, we are able to reduce the symbol error rate at the eavesdropper to below 1% using only the eavesdropper's received data and general knowledge of the format of the wireless packets.
While attacks on information systems have, for most practical purposes, binary outcomes (information was manipulated/eavesdropped, or not), attacks manipulating the sensor or control signals of Industrial Control Systems (ICS) can be tuned by the attacker to cause a continuous spectrum of damage. Attackers who want to remain undetected can attempt to hide their manipulation of the system by closely following the expected behavior of the system while injecting just enough false information at each time step to achieve their goals. In this work, we study whether attack detection can limit the impact of such stealthy attacks. We start with a comprehensive review of related work on attack detection schemes in the security and control systems communities. We then show that many of those works use detection schemes that do not limit the impact of stealthy attacks. We propose a new metric to measure the impact of stealthy attacks and relate it to our selection of an upper bound on false alarms. We finally show that the impact of such attacks can be mitigated in several cases by the proper combination and configuration of detection schemes. We demonstrate the effectiveness of our algorithms through simulations and experiments using real ICS testbeds and real ICS deployments.
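As one concrete example of a stateful detection scheme of the kind surveyed above, a generic non-parametric CUSUM detector is sketched below; the drift and threshold values are illustrative, and together they bound the bias a stealthy attacker can inject per time step without raising an alarm:

    import random

    def cusum(residuals, drift=0.5, tau=5.0):
        # return the index at which the CUSUM statistic exceeds tau, else None
        s = 0.0
        for k, r in enumerate(residuals):
            s = max(0.0, s + abs(r) - drift)   # the drift term keeps false alarms low
            if s > tau:
                return k
        return None

    random.seed(0)
    noise = [random.gauss(0, 0.2) for _ in range(200)]          # benign residuals
    print(cusum(noise))                       # None: no alarm on benign data
    print(cusum([r + 0.8 for r in noise]))    # index of the alarm for a crude attack
    print(cusum([r + 0.3 for r in noise]))    # None: stealthy bias stays under the drift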
Proactive security review and test efforts are a necessary component of the software development lifecycle. Since resource limitations often preclude reviewing, testing and fortifying the entire code base, prioritizing what code to review/test can improve a team's ability to find and remove more vulnerabilities that are reachable by an attacker. One way that professionals perform this prioritization is the identification of the attack surface of software systems. However, identifying the attack surface of a software system is non-trivial. The goal of this poster is to present the concept of a risk-based attack surface approximation based on crash dump stack traces for the prioritization of security code rework efforts. For this poster, we will present results from previous efforts in the attack surface approximation space, including studies on its effectiveness in approximating security relevant code for Windows and Firefox. We will also discuss future research directions for attack surface approximation, including discovery of additional metrics from stack traces and determining how many stack traces are required for a good approximation.
Given a history of detected malware attacks, can we predict the number of malware infections in a country? Can we do this for different malware and countries? This is an important question with numerous implications for cyber security, from designing better anti-virus software, to designing and implementing targeted patches, to more accurately measuring the economic impact of breaches. The problem is compounded by the fact that, as external observers, we can only detect a fraction of actual malware infections. In this paper we address this problem using data from Symantec covering more than 1.4 million hosts and 50 malware spread across two years and multiple countries. We first carefully design domain-based features from both the malware and machine-host perspectives. Second, inspired by epidemiological and information-diffusion models, we design a novel temporal non-linear model for malware spread and detection. Finally, we present ESM, an ensemble-based approach that combines both methods to construct a more accurate algorithm. Using extensive experiments spanning multiple malware and countries, we show that ESM can effectively predict malware infection ratios over time (both the actual number and the trend) up to 4 times better than several baselines on various metrics. Furthermore, ESM's performance is stable and robust even when the number of detected infections is low.
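A back-of-the-envelope illustration of the epidemiological flavour of the problem, fitting a logistic infection curve to hypothetical weekly detection counts by grid search; ESM's temporal non-linear model and ensemble are far more elaborate:

    # Fit I(t) = K / (1 + exp(-r * (t - t0))) to observed detections by grid search.
    import math

    def logistic(t, K, r, t0):
        return K / (1.0 + math.exp(-r * (t - t0)))

    def fit(observed):
        ts = range(len(observed))
        best = None
        for K in (max(observed) * s for s in (1.0, 1.5, 2.0, 3.0)):
            for r in (0.1, 0.2, 0.3, 0.5, 0.8):
                for t0 in ts:
                    err = sum((logistic(t, K, r, t0) - y) ** 2 for t, y in zip(ts, observed))
                    if best is None or err < best[0]:
                        best = (err, K, r, t0)
        return best[1:]

    observed = [5, 8, 14, 25, 40, 61, 80, 93, 100, 104]   # invented weekly detections
    K, r, t0 = fit(observed)
    print(round(K), r, t0)   # estimated eventual infections, growth rate, midpoint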
Security requirements for software systems have become more stringent as society becomes more interconnected via the Internet. New ways of prioritizing security efforts are needed so that security professionals can use their time effectively to find security vulnerabilities or prevent them from occurring in the first place. The goal of this work is to help software development teams prioritize security efforts by approximating the attack surface of a software system via stack trace analysis. Automated attack surface approximation is a technique that uses crash-dump stack traces to predict what code may contain exploitable vulnerabilities. If a code entity (a binary, file, or function) appears on stack traces, then Attack Surface Approximation (ASA) considers that code entity to be on the attack surface of the software system. We also explore whether the number of appearances of code on stack traces correlates with where security vulnerabilities are found. To date, feasibility studies of ASA have been performed on Windows 8 and 8.1 and on Mozilla Firefox. The results from these studies indicate that ASA may be useful for practitioners trying to secure their software systems. We are now working towards establishing the ground truth of the attack surface of software systems, along with examining how ASA results change over time, among other metrics.
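A minimal sketch of the ASA idea, assuming a hypothetical stack-trace format ('at file:function' per frame): any code entity appearing on a crash-dump stack trace is placed on the approximated attack surface and ranked by how many traces it appears on:

    from collections import Counter

    def parse_frames(stack_trace):
        # extract 'file:function' entities from lines like 'at parser.c:parse_hdr'
        entities = []
        for line in stack_trace.splitlines():
            line = line.strip()
            if line.startswith("at "):
                entities.append(line[3:])
        return entities

    def attack_surface(stack_traces):
        counts = Counter()
        for trace in stack_traces:
            counts.update(set(parse_frames(trace)))   # count each entity once per trace
        return counts

    traces = [
        "at net/http.c:read_request\nat parser.c:parse_hdr\nat util.c:copy_buf",
        "at parser.c:parse_hdr\nat util.c:copy_buf",
    ]
    print(attack_surface(traces).most_common())   # entities ranked by trace appearances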
The threat of DDoS and other cyberattacks has increased during the last decade. In addition to the radical increase in the number of attacks, they are also becoming more sophisticated, with targets ranging from ordinary users to service providers and even critical infrastructure. According to some sources, the sophistication of attacks is increasing faster than the mitigating actions against them. For example, determining the location of the attack origin is becoming nearly impossible, as cyber attackers routinely employ means to evade detection of the attack origin, such as proxy services and source address spoofing. The purpose of this paper is to initiate discussion about the effective Internet Protocol traceback mechanisms that are needed to overcome this problem. We propose an approach to traceback based on the extensive use of security metrics before (proactive) and during (reactive) attacks.
In this paper, we describe the results of several experiments designed to test two dynamic network moving target defenses against a propagating data exfiltration attack. We designed a collection of metrics to assess the costs to mission activities and the benefits in the face of attacks and evaluated the impacts of the moving target defenses in both areas. Experiments leveraged Siege's Cyber-Quantification Framework to automatically provision the networks used in the experiment, install the two moving target defenses, collect data, and analyze the results. We identify areas in which the costs and benefits of the two moving target defenses differ, and note some of their unique performance characteristics.
Most cyber network attacks begin with an adversary gaining a foothold within the network and proceed with lateral movement until a desired goal is achieved. The mechanism by which lateral movement occurs varies but the basic signature of hopping between hosts by exploiting vulnerabilities is the same. Because of the nature of the vulnerabilities typically exploited, lateral movement is very difficult to detect and defend against. In this paper we define a dynamic reachability graph model of the network to discover possible paths that an adversary could take using different vulnerabilities, and how those paths evolve over time. We use this reachability graph to develop dynamic machine-level and network-level impact scores. Lateral movement mitigation strategies which make use of our impact scores are also discussed, and we detail an example using a freely available data set.
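A small sketch of a machine-level impact score on a reachability graph, where the score of a host is simply the number of machines an attacker could reach from it; the paper's dynamic graph and scoring are richer, and the hosts and edges below are invented:

    # Edges mean 'an attacker on host A can laterally move to host B via some vulnerability'.
    from collections import deque

    def reachable(graph, start):
        seen, queue = {start}, deque([start])
        while queue:
            for nxt in graph.get(queue.popleft(), ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen - {start}

    def impact_scores(graph):
        return {host: len(reachable(graph, host)) for host in graph}

    graph = {
        "workstation": ["fileserver"],
        "fileserver": ["db", "backup"],
        "db": [],
        "backup": ["db"],
    }
    print(impact_scores(graph))   # {'workstation': 3, 'fileserver': 2, 'db': 0, 'backup': 1}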
The wide use of cloud computing and data outsourcing raises important concerns regarding data security, making protection mechanisms such as the encryption of sensitive data a necessity. The recent major theoretical breakthrough of finding the Holy Grail of encryption, i.e., fully homomorphic encryption, guarantees the privacy of queries and of their results on encrypted data. However, only a few studies propose a practical performance evaluation of the use of homomorphic encryption schemes to perform database queries. In this paper, we propose and analyse, in the context of a secure framework for a generic database query interpreter, two different methods in which client requests are dynamically executed on homomorphically encrypted data. Dynamic compilation of the requests allows us to take advantage of the various optimizations performed during an off-line step on an intermediate code representation, taking the form of boolean circuits, and, moreover, to specialize the execution using runtime information. For the returned encrypted results, we also assess the complexity and efficiency of the different protocols proposed in the literature in terms of overall execution time, accuracy, and communication overhead.
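An illustration of the boolean-circuit intermediate representation mentioned above: an equality predicate compiled to XNOR and AND gates. In the homomorphic setting each gate would be evaluated over ciphertexts; here the circuit is evaluated over plain bits for clarity:

    def to_bits(value, width=8):
        return [(value >> i) & 1 for i in range(width)]

    def equals_circuit(a_bits, b_bits):
        # AND over bitwise XNOR: outputs 1 iff all bits match
        out = 1
        for a, b in zip(a_bits, b_bits):
            out &= 1 ^ (a ^ b)       # XNOR gate followed by an AND gate
        return out

    # e.g. SELECT ... WHERE age = 42, with the comparison expressed gate by gate
    print(equals_circuit(to_bits(42), to_bits(42)))   # 1 (row matches)
    print(equals_circuit(to_bits(42), to_bits(37)))   # 0 (row filtered out)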
In this paper, we propose practical and efficient word and phrase proximity searchable encryption protocols for cloud-based relational databases. The proposed advanced searchable encryption protocols are provably secure. We formalize the security assurance with cryptographic security definitions and prove the security of our searchable encryption protocols under Shannon's perfect secrecy assumption. We have tested the proposed protocols comprehensively on Amazon's high-performance computing servers using a MySQL database and present the results. The proposed protocols ensure zero space and communication overhead because the ciphertext size is equal to the plaintext size. For the same reason, the database schema does not change for existing applications. We also present the results of a comprehensive analysis of the Song, Wagner, and Perrig scheme.
Identity concealment and zero round-trip time (0-RTT) connection establishment are two current research focuses in the design and analysis of secure transport protocols, such as TLS 1.3 and Google's QUIC, in the client-server setting. In this work, we introduce a new primitive for identity-concealed authenticated encryption in the public-key setting, referred to as higncryption, which can be viewed as a novel monolithic integration of public-key encryption, digital signature, and identity concealment. We then present the security definitional framework for higncryption and a conceptually simple (yet carefully designed) protocol construction. As a new primitive, higncryption can have many applications. In this work, we focus on its applications to 0-RTT authentication, showing that higncryption is well suited to and compatible with QUIC and OPTLS, and on its applications to identity-concealed authenticated key exchange (CAKE) and unilateral CAKE (UCAKE). Of independent interest is a new concise security definitional framework for CAKE and UCAKE proposed in this work, which unifies the traditional BR and (post-ID) frameworks, enjoys composability, and ensures very strong security guarantees. Along the way, we make a systematic comparative study with related protocols and mechanisms, including Zheng's signcryption, one-pass HMQV, QUIC, TLS 1.3, and OPTLS, most of which are widely standardized or in use.
Untrusted third parties are found throughout the integrated circuit (IC) design flow, resulting in potential threats to IC reliability and security. These threats include IC counterfeiting, intellectual property (IP) theft, IC overproduction, and the insertion of hardware Trojans. Logic encryption has emerged as a method of enhancing security against such threats; however, current implementations of logic encryption, including the XOR and look-up table (LUT) techniques, have high per-gate overheads in area, performance, and power. A novel gate-level logic encryption technique with reduced per-gate overheads is described in this paper. In addition, a technique to expand the search space of the key sequence is provided, increasing the difficulty for an adversary to extract the key value. A power reduction of 41.50%, an estimated area reduction of 43.58%, and a performance increase of 34.54% are achieved when using the proposed gate-level logic encryption instead of the LUT-based technique for an encrypted AND gate.
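A toy model of the baseline XOR key-gate technique that the paper improves upon, showing that the correct key restores the intended gate function while a wrong key corrupts every row of the truth table:

    def locked_and(a, b, key_bit):
        return (a & b) ^ key_bit            # XOR key gate inserted on the output net

    CORRECT_KEY = 1                         # with key 1 the locked net is inverted,
                                            # so the design re-inverts it downstream
    def circuit(a, b, key_bit):
        return locked_and(a, b, key_bit) ^ CORRECT_KEY

    truth_ok = [circuit(a, b, CORRECT_KEY) == (a & b) for a in (0, 1) for b in (0, 1)]
    truth_bad = [circuit(a, b, 1 - CORRECT_KEY) == (a & b) for a in (0, 1) for b in (0, 1)]
    print(truth_ok)    # [True, True, True, True]   - correct key
    print(truth_bad)   # [False, False, False, False] - wrong key inverts every output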
Logistic regression is a powerful machine learning tool for classifying data. When dealing with sensitive data such as private or medical information, care is necessary. In this paper, we propose a secure system for protecting the training data in logistic regression via homomorphic encryption. Perhaps surprisingly, despite the non-polynomial operations required to train logistic regression, we show that only additively homomorphic encryption is needed to build our system. Our system is secure and scalable with the dataset size.
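One common way in which additive homomorphism alone suffices is secure aggregation of per-record gradient contributions; the sketch below mocks the encryption (a real system would use an additively homomorphic scheme such as Paillier) and differs from the paper's construction in its details:

    import math, random

    class MockAdditiveHE:
        # stand-in exposing only the additive-homomorphism interface
        def encrypt(self, x): return ("ct", x)
        def add(self, c1, c2): return ("ct", c1[1] + c2[1])
        def decrypt(self, c): return c[1]

    def sigmoid(z): return 1.0 / (1.0 + math.exp(-z))

    def encrypted_gradient(data, labels, w, he):
        # data owners encrypt per-record gradients; the aggregator only adds ciphertexts
        enc_grad = [he.encrypt(0.0) for _ in w]
        for x, y in zip(data, labels):
            err = sigmoid(sum(wi * xi for wi, xi in zip(w, x))) - y
            for j, xj in enumerate(x):
                enc_grad[j] = he.add(enc_grad[j], he.encrypt(err * xj))
        return enc_grad

    random.seed(1)
    data = [[1.0, random.uniform(-1, 1)] for _ in range(100)]
    labels = [1 if x[1] > 0 else 0 for x in data]
    w, he, lr = [0.0, 0.0], MockAdditiveHE(), 0.5
    for _ in range(50):
        g = encrypted_gradient(data, labels, w, he)
        w = [wi - lr * he.decrypt(gi) / len(data) for wi, gi in zip(w, g)]
    print([round(wi, 2) for wi in w])   # the second weight should be clearly positive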
Code clone detection is an important problem for software maintenance and evolution. Many approaches consider either structure or identifiers, but none of the existing detection techniques model both sources of information. These techniques also depend on generic, handcrafted features to represent code fragments. We introduce learning-based detection techniques where everything for representing terms and fragments in source code is mined from the repository. Our code analysis supports a framework, which relies on deep learning, for automatically linking patterns mined at the lexical level with patterns mined at the syntactic level. We evaluated our novel learning-based approach for code clone detection with respect to feasibility from the point of view of software maintainers. We sampled and manually evaluated 398 file- and 480 method-level pairs across eight real-world Java systems; 93% of the file- and method-level samples were evaluated to be true positives. Among the true positives, we found pairs mapping to all four clone types. We compared our approach to a traditional structure-oriented technique and found that our learning-based approach detected clones that were either undetected or suboptimally reported by the prominent tool Deckard. Our results affirm that our learning-based approach is suitable for clone detection and a tenable technique for researchers.
Transformations form an important part of developing domain-specific languages, where they are used to provide semantics for typing and evaluation. Yet few solutions exist for verifying transformations written in expressive high-level transformation languages. We take a step towards that goal by developing a general symbolic execution technique that handles programs written in these high-level transformation languages. We use logical constraints to describe structured symbolic values, including containment, acyclicity, and simple unordered collections (sets), and to handle deep type-based querying of syntax hierarchies. We evaluate this symbolic execution technique on a collection of refactoring and model transformation programs, showing that a white-box test generation tool based on symbolic execution obtains better code coverage than a black-box test generator for such programs in almost all tested cases.
Abstractions make building complex systems possible. Many facilities provided by a modern programming language are designed specifically to build a certain style of abstraction. Abstractions also aim to enhance code reusability, thus enhancing programmer productivity and effectiveness. Real-world software systems can grow to have a complicated hierarchy of abstractions. Often, the hierarchy grows unnecessarily deep because the programmers have envisioned the most generic use cases for a piece of code in order to make it reusable. Sometimes the abstractions used in a program are not the appropriate ones, and it would be simpler for the higher-level client to circumvent them. Another problem is the impedance mismatch between pieces of code or libraries coming from different projects that were not designed to work together; interoperability between such libraries is often hindered by abstractions, by design, in the name of hiding implementation details and encapsulation. These problems call for forms of abstraction that are easy to manipulate when needed. In this paper, we describe a powerful mechanism for creating white-box abstractions that encourages flatter abstraction hierarchies and eases manipulation and customization when necessary: program refinement. In doing so, we rely on the basic principle that writing directly in the host programming language is the least restrictive option in terms of expressiveness, and we allow the programmer to reuse and customize existing code snippets to address their specific needs.
Embedded devices with constrained computational resources, such as wireless sensor network nodes, electronic tag readers, roadside units in vehicular networks, and smart watches and wristbands, are widely used in the Internet of Things. Many such devices are deployed in untrusted environments, and others are easy to lose, making capture by adversaries possible. Accordingly, from a security-research perspective, these devices operate in a white-box attack context, where the adversary may have total visibility into the implementation of the built-in cryptosystem and full control over its execution. Dealing with attacks from such a powerful adversary is undoubtedly a significant challenge. Existing encryption algorithms for white-box attack contexts typically require large amounts of memory, varying from one to dozens of megabytes, and are thus not suitable for resource-constrained devices. As a countermeasure in such circumstances, we propose an ultra-lightweight encryption scheme for protecting the confidentiality of data in white-box attack contexts. The encryption is executed with secret components specialized for resource-constrained devices against white-box attacks, and the encryption algorithm requires a relatively small amount of static data, ranging from 48 to 92 KB. The security and efficiency of the proposed scheme have been analyzed theoretically with positive results, and experimental evaluations indicate that the scheme satisfies the resource constraints in terms of limited memory use and low computational cost.
Complex traffic networks include a number of controlled intersections and, commonly, multiple districts or municipalities. The result is that the overall traffic control problem is extremely complex computationally. Moreover, given that different municipalities may have distinct, non-aligned interests, traffic light controller design is inherently decentralized, a consideration that is almost entirely absent from the related literature. Both complexity and decentralization have great bearing on the overall quality of the traffic network as well as on its security. We consider both of these issues in a dynamic traffic network. First, we propose an effective local search algorithm to efficiently design system-wide control logic for a collection of intersections. Second, we propose a game-theoretic (Stackelberg game) model of traffic network security in which an attacker can deploy denial-of-service attacks on sensors, and we develop a resilient control algorithm to mitigate such threats. Finally, we propose a game-theoretic model of decentralization and investigate this model both in the context of baseline traffic network design and in resilient design accounting for attacks. Our methods are implemented and evaluated using a simple traffic network scenario in SUMO.
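A toy local search over per-intersection green splits against a made-up delay proxy, to illustrate the kind of system-wide signal-timing optimization the first contribution addresses; the paper's algorithm and SUMO-based traffic model are far richer:

    import random

    def delay(splits, demand_ns, demand_ew):
        # delay grows when a phase's share of the cycle is small relative to its demand
        return sum(dns / s + dew / (1 - s)
                   for s, dns, dew in zip(splits, demand_ns, demand_ew))

    def local_search(demand_ns, demand_ew, iters=2000, step=0.02, seed=0):
        rng = random.Random(seed)
        splits = [0.5] * len(demand_ns)
        best = delay(splits, demand_ns, demand_ew)
        for _ in range(iters):
            i = rng.randrange(len(splits))
            cand = splits[:]
            cand[i] = min(0.9, max(0.1, cand[i] + rng.choice((-step, step))))
            c = delay(cand, demand_ns, demand_ew)
            if c < best:
                splits, best = cand, c
        return splits, best

    demand_ns, demand_ew = [8.0, 2.0, 5.0], [2.0, 8.0, 5.0]   # invented demands
    splits, cost = local_search(demand_ns, demand_ew)
    print([round(s, 2) for s in splits], round(cost, 2))      # roughly [0.67, 0.33, 0.5]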
Emergency message delivery in packet networks is promising in terms of resiliency to failures and of service delivery to persons with disabilities. In this paper, we propose an NDN (Named Data Networking)-based emergency message delivery mechanism that leverages multicasting and ABE (Attribute-Based Encryption) functions.
Bulk electric systems include hundreds of synchronous generators. Faults in such systems can induce oscillations in the generators which, if not detected and controlled, can destabilize the system. Mode estimation is a popular method for oscillation detection. In this paper, we propose a resilient algorithm to estimate electro-mechanical oscillation modes in large-scale power systems in the presence of false data. In particular, we add a fault tolerance mechanism to a variant of the alternating direction method of multipliers (ADMM) called S-ADMM. We evaluate our method on an IEEE 68-bus test system under different attack scenarios and show that in all scenarios our algorithm converges well.
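A minimal consensus-ADMM sketch with a median-based global update as a crude fault-tolerance mechanism against a node injecting false data; S-ADMM and the actual mode-estimation objective are considerably more involved, and the local estimates below are invented:

    import statistics

    def robust_consensus_admm(local_values, rho=1.0, iters=100):
        # consensus on a scalar: minimize sum_i (x_i - a_i)^2 subject to x_i = z,
        # with the z-update replaced by a median to resist outlying reports
        n = len(local_values)
        u = [0.0] * n
        z = statistics.median(local_values)
        for _ in range(iters):
            x = [(a + rho * (z - ui)) / (1 + rho) for a, ui in zip(local_values, u)]
            z = statistics.median(xi + ui for xi, ui in zip(x, u))   # robust z-update
            u = [ui + xi - z for ui, xi in zip(u, x)]
        return z

    honest = [0.48, 0.52, 0.50, 0.49, 0.51]           # e.g. local mode estimates
    print(round(robust_consensus_admm(honest), 3))    # ~0.50
    attacked = honest + [5.0]                         # one node injects false data
    print(round(robust_consensus_admm(attacked), 3))  # still close to 0.50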