Biblio
Hardware information flow analysis detects security vulnerabilities resulting from unintended design flaws, timing channels, and hardware Trojans. The underlying information flow models are typically generated in a general way and therefore include a significant amount of redundancy that is irrelevant to the specified security properties. In this work, we propose a property-specific approach to information flow security. We create information flow models tailored to the properties to be verified by performing a property-specific search to identify security-critical paths. This helps find suspicious signals that require closer inspection and quickly eliminates portions of the design that are free of security violations. Our property-specific trimming technique reduces the complexity of the security model; this accelerates security verification and restricts potential security violations to a smaller region, which helps quickly pinpoint hardware security vulnerabilities.
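As a rough illustration of the property-specific search, the sketch below treats the flow model as a directed graph over signals and keeps only signals lying on some path from the property's source to its sink. The graph representation, signal names, and BFS-based pruning are assumptions made for this sketch, not the paper's implementation.

```python
# Hypothetical sketch of property-specific trimming: the information flow
# model is viewed as a directed graph over signals, and only signals lying
# on a path from the property's source signal to its sink are kept.
from collections import deque

def reachable(graph, start):
    """Breadth-first search; graph maps a signal to the signals it flows into."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for succ in graph.get(node, ()):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return seen

def trim_for_property(graph, source, sink):
    """Keep only signals on some source->sink flow path (the security-critical region)."""
    fwd = reachable(graph, source)                      # signals tainted by the source
    rev = {v: [] for v in graph}
    for u, succs in graph.items():
        for v in succs:
            rev.setdefault(v, []).append(u)             # reversed edges
    bwd = reachable(rev, sink)                          # signals that can reach the sink
    keep = fwd & bwd
    return {u: [v for v in graph.get(u, ()) if v in keep] for u in keep}

# Toy flow model: key -> alu -> bus -> uart_out, plus an unrelated debug path.
flows = {"key": ["alu"], "alu": ["bus"], "bus": ["uart_out"],
         "timer": ["debug_reg"], "debug_reg": []}
print(trim_for_property(flows, "key", "uart_out"))      # debug path is trimmed away
```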
This paper presents a study on detecting cyber attacks on industrial control systems (ICS) using convolutional neural networks. The study was performed on the Secure Water Treatment (SWaT) testbed dataset, which represents a scaled-down version of a real-world industrial water treatment plant. We propose an anomaly detection method based on measuring the statistical deviation of the predicted value from the observed value. We applied the proposed method using a variety of deep neural network architectures, including different variants of convolutional and recurrent networks. The test dataset included 36 different cyber attacks. The proposed method successfully detected 31 attacks with three false positives, thus improving on previous research based on this dataset. The results of the study show that 1D convolutional networks can be successfully used for anomaly detection in industrial control systems and outperform recurrent networks in this setting. The findings also suggest that 1D convolutional networks are effective at time series prediction tasks, which are traditionally considered best solved using recurrent neural networks. This observation is promising, as 1D convolutional neural networks are simpler, smaller, and faster than recurrent neural networks.
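A minimal sketch of the detection scheme follows. The window length, sensor count, layer sizes, and the use of tf.keras are illustrative assumptions, not the configuration reported in the study.

```python
# Illustrative sketch: a 1D CNN predicts the next multivariate sensor reading
# from a window of past readings; the anomaly score is the statistical
# deviation (z-score) of the prediction error relative to attack-free data.
import numpy as np
import tensorflow as tf

WINDOW, N_SENSORS = 60, 51        # assumed window length and sensor count

def build_model():
    return tf.keras.Sequential([
        tf.keras.layers.Conv1D(32, 5, activation="relu",
                               input_shape=(WINDOW, N_SENSORS)),
        tf.keras.layers.Conv1D(32, 5, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(N_SENSORS),          # predicts the next sample
    ])

def fit_error_stats(model, normal_windows, normal_next):
    """Per-sensor mean/std of the prediction error on attack-free data."""
    err = np.abs(model.predict(normal_windows) - normal_next)
    return err.mean(axis=0), err.std(axis=0) + 1e-9

def anomaly_scores(model, windows, next_values, mu, sigma):
    err = np.abs(model.predict(windows) - next_values)
    return ((err - mu) / sigma).max(axis=1)        # worst per-sensor z-score

# model = build_model(); model.compile(optimizer="adam", loss="mse")
# model.fit(train_windows, train_next, epochs=10)  # trained on normal data only
```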
Post-quantum secure communication has attracted much interest in recent years. Known computationally secure post-quantum key agreement protocols are resource intensive for small devices. These devices may need to securely send frequent short messages, for example to report the measurement of a sensor. Secure communication using physical assumptions provides information-theoretic (and hence quantum-safe) security with small computational overhead. The security and efficiency analysis of these systems, however, is asymptotic. In this poster we consider two secure message communication systems, and derive and compare their security and efficiency for finite-length messages. Our results show that these systems indeed provide an attractive alternative for post-quantum security.
Resilient control systems should efficiently restore control over physical systems not only after the control systems themselves are sabotaged, but also after the physical systems are damaged. To enhance the resilience of control systems, given an originally minimal-input controlled linear time-invariant (LTI) physical system, we address the problem of efficiently recovering control after a known system vertex is removed, by finding the minimum number of inputs. According to the minimum input theorem, given the digraph associated with the LTI model and a precomputed maximum matching, this problem is modeled as recovering the controllability of the digraph after a known network vertex is removed. We then recover controllability of the residual network by efficiently finding a maximum matching rather than recomputing one from scratch. As a result, apart from precomputing a maximum matching and identifying the removed vertex, the worst-case execution time of recovering control over the residual LTI physical system is linear.
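The recovery step builds on the minimum input theorem, which ties the number of required inputs to a maximum matching of the network's digraph. The sketch below illustrates that quantity by simply recomputing the matching with networkx (the paper's contribution is to update the matching incrementally instead); the example network is hypothetical.

```python
# Minimum input theorem, illustrated: the driver nodes of an LTI network are
# the nodes left unmatched by a maximum matching of the bipartite graph built
# from the digraph's edges; their count is the minimum number of inputs.
import networkx as nx

def minimum_inputs(nodes, edges):
    """nodes: list of vertex ids; edges: iterable of (u, v) for edge u -> v."""
    B = nx.Graph()
    out_side = [("out", u) for u in nodes]
    B.add_nodes_from(out_side, bipartite=0)
    B.add_nodes_from((("in", v) for v in nodes), bipartite=1)
    B.add_edges_from((("out", u), ("in", v)) for u, v in edges)
    matching = nx.bipartite.hopcroft_karp_matching(B, top_nodes=out_side)
    matched_edges = len(matching) // 2            # the dict stores both directions
    return max(len(nodes) - matched_edges, 1)

nodes = [1, 2, 3, 4]
edges = [(1, 2), (2, 3), (3, 4)]                  # a simple directed chain
print(minimum_inputs(nodes, edges))               # 1 driver node (node 1)

# After removing vertex 3, the residual network needs more inputs:
residual_nodes = [1, 2, 4]
residual_edges = [(u, v) for u, v in edges if 3 not in (u, v)]
print(minimum_inputs(residual_nodes, residual_edges))  # 2
```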
In order to evaluate network security risks and implement effective defenses in industrial control systems, a risk assessment method for industrial control systems based on attack graphs is proposed. The method uses the concept of network security elements to translate network attacks into network state transition problems and builds an industrial control network attack graph model. To reduce the subjectivity of evaluations based purely on expert experience, an atomic attack probability assignment method and the CVSS evaluation system are introduced to assess the security status of the industrial control system. Finally, a case study is performed using the centralized control system of a thermal power plant as the experimental background. The experimental results show that the method can comprehensively analyze the potential security hazards in an industrial control system and provide a basis for security management personnel to take effective defense measures.
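As a rough illustration of an atomic attack probability assignment, one common choice is to normalize the CVSS v2 exploitability subscore and multiply the resulting probabilities along an attack path. The mapping and the example path below are assumptions made for the sketch, not necessarily the paper's exact formulas.

```python
# Illustrative assignment: atomic attack probability from the CVSS v2
# exploitability subscore (20 * AccessVector * AccessComplexity * Authentication,
# capped at 10); an attack path's probability is the product of its atomic attacks.
CVSS_AV = {"local": 0.395, "adjacent": 0.646, "network": 1.0}
CVSS_AC = {"high": 0.35, "medium": 0.61, "low": 0.71}
CVSS_AU = {"multiple": 0.45, "single": 0.56, "none": 0.704}

def atomic_probability(av, ac, au):
    exploitability = 20 * CVSS_AV[av] * CVSS_AC[ac] * CVSS_AU[au]
    return min(exploitability, 10.0) / 10.0

def path_probability(atomic_attacks):
    p = 1.0
    for av, ac, au in atomic_attacks:
        p *= atomic_probability(av, ac, au)
    return p

# Two-step path: compromise an engineering workstation, then reach a PLC.
path = [("network", "low", "none"), ("adjacent", "low", "single")]
print(round(path_probability(path), 3))           # probability of the full path
```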
The attack graph approach is a common tool for the analysis of network security. However, the analysis of attack graphs can be complicated and difficult depending on the size of the graph. This paper presents an approximate analysis approach for attack graphs based on Q-learning. First, we employ multi-host multi-stage vulnerability analysis (MulVAL) to generate an attack graph for a given network topology. Then we refine the attack graph and generate a simplified graph called a transition graph. Next, we use a Q-learning model to find possible attack routes that an attacker could use to compromise the security of the network. Finally, we evaluate the approach by applying it to a typical IT network scenario with specific services, network configurations, and vulnerabilities.
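A minimal sketch of the Q-learning step is shown below, assuming the refined transition graph is given as a dictionary from states to available actions; the states, rewards, and hyperparameters are illustrative rather than taken from the paper.

```python
# Tabular Q-learning over a transition graph of the form
# {state: {action: (next_state, reward)}}; the learned Q-table ranks the
# attack routes an attacker would prefer.
import random

def q_learning(graph, start, goal, episodes=2000, alpha=0.5, gamma=0.9, eps=0.2):
    Q = {s: {a: 0.0 for a in acts} for s, acts in graph.items()}
    for _ in range(episodes):
        s = start
        while s != goal and graph.get(s):
            acts = list(graph[s])
            if random.random() < eps:                       # epsilon-greedy choice
                a = random.choice(acts)
            else:
                a = max(acts, key=lambda act: Q[s][act])
            nxt, r = graph[s][a]
            best_next = max(Q[nxt].values()) if graph.get(nxt) else 0.0
            Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
            s = nxt
    return Q

# Toy transition graph: the attacker starts in the DMZ and aims for the PLC.
graph = {
    "dmz":       {"exploit_web": ("webserver", 0.0)},
    "webserver": {"pivot_hmi": ("hmi", 0.0), "scan": ("dmz", -1.0)},
    "hmi":       {"exploit_plc": ("plc", 10.0)},
    "plc":       {},
}
Q = q_learning(graph, "dmz", "plc")
print(max(Q["webserver"], key=Q["webserver"].get))   # preferred next action
```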
Modern software should satisfy multiple goals simultaneously: it should provide predictable performance, be robust to failures, handle peak loads and deal seamlessly with unexpected conditions and changes in the execution environment. For this to happen, software designs should account for the possibility of runtime changes and provide formal guarantees of the software's behavior. Control theory is one of the possible design drivers for runtime adaptation, but adopting control theoretic principles often requires additional, specialized knowledge. To overcome this limitation, automated methodologies have been proposed to extract the necessary information from experimental data and design a control system for runtime adaptation. These proposals, however, only process one goal at a time, creating a chain of controllers. In this paper, we propose and evaluate the first automated strategy that takes into account multiple goals without separating them into multiple control strategies. Avoiding the separation allows us to tackle a larger class of problems and provide stronger guarantees. We test our methodology's generality with three case studies that demonstrate its broad applicability in meeting performance, reliability, quality, security, and energy goals despite environmental or requirements changes.
In the cloud computing era, in order to avoid computational burdens, many organizations tend to outsource their computations to third-party cloud servers. To protect service quality, the integrity of computation results needs to be guaranteed. In this paper, we develop a game-theoretic framework that helps the outsourcer maximize its payoff while ensuring the desired level of integrity for the outsourced computation. We define two Stackelberg games and analyze the sensitivity of the optimal settings to the parameters of the model.
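As a toy illustration of the leader-follower structure only (not the paper's actual games or parameters), an outsourcer can commit to an audit probability, anticipate the server's best response, and pick the probability that maximizes its own payoff.

```python
# Toy Stackelberg sketch with illustrative parameters: the outsourcer (leader)
# commits to an audit probability p; the server (follower) best-responds by
# cheating iff its saved cost exceeds the expected fine; the leader searches p.
def server_cheats(p, saved_cost, fine):
    return saved_cost > p * fine              # follower's best response

def outsourcer_payoff(p, value, audit_cost, loss_if_cheated, saved_cost, fine):
    cost = p * audit_cost
    if server_cheats(p, saved_cost, fine):
        # a cheat is only caught (and the loss avoided) when an audit happens
        return value - cost - (1 - p) * loss_if_cheated
    return value - cost

def best_audit_probability(**params):
    grid = [i / 100 for i in range(101)]
    return max(grid, key=lambda p: outsourcer_payoff(p, **params))

p_star = best_audit_probability(value=10.0, audit_cost=2.0,
                                loss_if_cheated=8.0, saved_cost=1.0, fine=5.0)
print(p_star)                                  # 0.2: cheapest deterring audit rate
```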
The eleven papers in this special section focus on power electronics-enabled autonomous systems. Power systems are going through a paradigm change from centralized generation to distributed generation and, further, to the smart grid. Millions of relatively small distributed energy resources (DER), including wind turbines, solar panels, electric vehicles, energy storage systems, and flexible loads, are being integrated into power systems through power electronic converters. This imposes great challenges on the stability, scalability, reliability, security, and resiliency of future power systems. This section joins the forces of the control/systems theory, power electronics, and power systems communities to address various emerging issues of power-electronics-enabled autonomous power systems, paving the way for large-scale deployment of DERs and flexible loads.
Group exponentiation is an important operation used in many public-key cryptosystems and, more generally, cryptographic protocols. To expand the applicability of these solutions to computationally weaker devices, it has been advocated that this operation be outsourced from a computationally weaker client to a computationally stronger server, possibly implemented in a cloud-based architecture. While preliminary solutions to this problem considered mostly honest servers, or multiple separated servers, some of which are honest, solving this problem in the case of a single (logical), possibly malicious, server has remained open since a formal cryptographic model was introduced in [20]. Several later attempts either failed to achieve privacy or only bounded the (security) probability that a cheating server convinces a client of an incorrect result by a constant. In this paper we solve this problem for a large class of cyclic groups, thus making our solutions applicable to many cryptosystems in the literature that are based on the hardness of the discrete logarithm problem or related assumptions. Our main protocol satisfies natural correctness, security, privacy and efficiency requirements, where the security probability is exponentially small. In our main protocol, with very limited offline computation and server computation, the client can delegate an exponentiation to an exponent of the same length as a group element by performing an exponentiation to an exponent of short length (i.e., the length of a statistical parameter). We also show an extension protocol that further reduces client computation by a constant factor, while increasing offline computation and server computation by about the same factor.
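The toy sketch below conveys only the general verification idea behind a short-exponent check with a statistical parameter; it uses an illustrative small group and, unlike the protocol described above, makes no attempt to hide the exponent from the server.

```python
# Illustrative verifiable delegation sketch (verification idea only, no privacy):
# the client asks the server for g^x and g^(t*x + r) for a secret short t and a
# precomputed r, then checks consistency with a single short exponentiation.
import secrets

# Toy prime-order group: p = 2q + 1 (safe prime), g = 4 generates the order-q subgroup.
p, q, g = 2039, 1019, 4
LAM = 20                                   # statistical security parameter

def offline():
    r = secrets.randbelow(q)
    return r, pow(g, r, p)                 # (r, g^r) precomputed offline

def delegate(x, r, R, server):
    t = secrets.randbelow(2 ** LAM) + 1    # short, secret challenge exponent
    A, B = server(x, (t * x + r) % q)      # server does the two long exponentiations
    if (pow(A, t, p) * R) % p != B:        # client's only exponent is the short t
        raise ValueError("server cheated")
    return A

honest = lambda a, b: (pow(g, a, p), pow(g, b, p))
r, R = offline()
x = secrets.randbelow(q)
assert delegate(x, r, R, honest) == pow(g, x, p)
```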
In a software system it is possible to quantify the amount of information that is leaked or corrupted by analysing the flows of information present in the source code. In a cyber-physical system, information flows are present not only at the digital level but also at the physical level, and they also flow back and forth between the two levels. In this work, we provide a methodology to formally analyse a composite, cyber-physical system model (combining physics and control) using an information flow-theoretic approach. We use this approach to quantify the level of vulnerability of a system with respect to attackers with different capabilities. We illustrate our approach by means of a water distribution case study.
Existing probabilistic privacy enforcement approaches permit the execution of a program that processes sensitive data only if the information it leaks is within the bounds specified by a given policy. Thus, to extract any information, users must manually design a program that satisfies the policy. In this work, we present a novel synthesis approach that automatically transforms a program into one that complies with a given policy. Our approach consists of two ingredients. First, we phrase the problem of determining the amount of leaked information as Bayesian inference, which enables us to leverage existing probabilistic programming engines. Second, we present two synthesis procedures that add uncertainty to the program's outputs as a way of reducing the amount of leaked information: an optimal one based on SMT solving and a greedy one with quadratic running time. We implemented and evaluated our approach on 10 representative programs from multiple application domains. We show that our system can successfully synthesize a permissive enforcement mechanism for all examples.
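A toy version of the first ingredient, using direct enumeration instead of a probabilistic programming engine, is sketched below: the posterior over a discrete secret is obtained by Bayes' rule for each program output, leakage is measured as the adversary's expected guessing probability, and coarsening the output stands in for the synthesized uncertainty. The secret, programs, and numbers are illustrative.

```python
# Toy leakage quantification: for a deterministic program over a discrete
# secret, group the prior by output (Bayes' rule) and compute the expected
# Bayes vulnerability; adding output uncertainty (coarser buckets) lowers it.
from collections import defaultdict

SECRETS = range(100)                       # e.g. an age, uniformly distributed
prior = {s: 1 / 100 for s in SECRETS}

def posterior_vulnerability(program):
    """Expected probability of guessing the secret after seeing the output."""
    by_output = defaultdict(dict)
    for s, p in prior.items():
        by_output[program(s)][s] = p       # deterministic program: group by output
    return sum(max(joint.values()) for joint in by_output.values())

exact_decade = lambda age: age // 10       # leaky program
coarse_bucket = lambda age: age // 25      # program with added uncertainty

print(posterior_vulnerability(exact_decade))    # ~0.10
print(posterior_vulnerability(coarse_bucket))   # ~0.04 (closer to the 0.01 prior)
```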
Cloud computing services have gained a lot of traction in recent years, but the shift of data from user-owned desktops and laptops to cloud storage systems has led to serious data privacy implications for users. Even though the privacy notices supplied by cloud vendors detail the data practices and the options available to protect privacy, the lengthy and free-flowing textual format of these notices is often difficult for users to comprehend. We therefore propose a simplified presentation format for privacy practices and choices, termed a "Privacy-Dashboard", based on Protection Motivation Theory (PMT), and we intend to test the effectiveness of the presentation format using cognitive-fit theory. We also indirectly model cloud privacy concerns using an Item-Response Theory (IRT) model. We contribute to the information privacy literature by addressing the gap in developing privacy protection artifacts that improve the privacy protection behaviors of individual users. The proposed "Privacy-Dashboard" would provide easy-to-use choice mechanisms that allow consumers to control how their data is collected and used.
This paper introduces the first state-based formalization of isolation guarantees. Our approach is premised on a simple observation: applications view storage systems as black-boxes that transition through a series of states, a subset of which are observed by applications. Defining isolation guarantees in terms of these states frees definitions from implementation-specific assumptions. It makes immediately clear what anomalies, if any, applications can expect to observe, thus bridging the gap that exists today between how isolation guarantees are defined and how they are perceived. The clarity that results from definitions based on client-observable states brings forth several benefits. First, it allows us to easily compare the guarantees of distinct, but semantically close, isolation guarantees. We find that several well-known guarantees, previously thought to be distinct, are in fact equivalent, and that many previously incomparable flavors of snapshot isolation can be organized in a clean hierarchy. Second, freeing definitions from implementation-specific artefacts can suggest more efficient implementations of the same isolation guarantee. We show how a client-centric implementation of parallel snapshot isolation can be more resilient to slowdown cascades, a common phenomenon in large-scale datacenters.
We present a binary static analysis approach to detect intelligent electronic device (IED) malware based on the time requirements of electrical substations. We explore graph theory techniques to model the timing performance of an IED executable. Timing performance is subsequently used as a metric for IED malware detection. More specifically, we perform a series of steps to reduce a part of the IED malware detection problem into a classical problem of graph theory, namely finding single-source shortest paths on a weighted directed acyclic graph (DAG). Shortest paths represent execution flows that take the longest time to compute. Their clock cycles are examined to determine if they violate the real-time nature of substation monitoring and control, in which case IED malware detection is attained. We did this work with particular reference to implementations of protection and control algorithms that use the IEC 61850 standard for substation data representation and network communication. We tested our approach against IED exploits and malware, network scanning code, and numerous malware samples involved in recent ICS malware campaigns.
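The graph-theoretic core can be sketched as follows, assuming the executable has already been reduced to a DAG whose edge weights are clock-cycle estimates. Negating the weights turns the worst-case (longest) path into the single-source shortest-path problem mentioned above; the sketch computes it directly in topological order. The example graph and cycle budget are hypothetical.

```python
# Worst-case cycle count on a weighted DAG via relaxation in topological order.
from collections import defaultdict

def worst_case_cycles(edges, source):
    """edges: list of (u, v, cycles); returns max cycles from source to every node."""
    graph, indeg, nodes = defaultdict(list), defaultdict(int), set()
    for u, v, w in edges:
        graph[u].append((v, w))
        indeg[v] += 1
        nodes |= {u, v}
    order, queue = [], [n for n in nodes if indeg[n] == 0]   # Kahn's algorithm
    while queue:
        n = queue.pop()
        order.append(n)
        for v, _ in graph[n]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    dist = {n: float("-inf") for n in nodes}
    dist[source] = 0
    for u in order:                          # relax edges in topological order
        if dist[u] == float("-inf"):
            continue
        for v, w in graph[u]:
            dist[v] = max(dist[v], dist[u] + w)
    return dist

edges = [("entry", "parse", 40), ("parse", "protect", 120),
         ("parse", "log", 15), ("protect", "trip", 60), ("log", "trip", 5)]
cycles = worst_case_cycles(edges, "entry")
print(cycles["trip"])                        # 220: entry -> parse -> protect -> trip
DEADLINE = 200                               # assumed clock-cycle budget for the function
print("suspicious" if cycles["trip"] > DEADLINE else "ok")
```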
This paper presents an approach to formalizing and enforcing a class of use privacy properties in data-driven systems. In contrast to prior work, we focus on use restrictions on proxies (i.e., strong predictors) of protected information types. Our definition relates proxy use to intermediate computations that occur in a program and identifies two essential properties that characterize this behavior: 1) its result is strongly associated with the protected information type in question, and 2) it is likely to causally affect the final output of the program. For a specific instantiation of this definition, we present a program analysis technique that detects instances of proxy use in a model and provides a witness that identifies which parts of the corresponding program exhibit the behavior. Recognizing that not all instances of proxy use of a protected information type are inappropriate, we make use of a normative judgment oracle that makes this inappropriateness determination for a given witness. Our repair algorithm uses the witness of an inappropriate proxy use to transform the model into one that provably does not exhibit proxy use, while avoiding changes that unduly affect classification accuracy. Using a corpus of social datasets, our evaluation shows that these algorithms are able to detect proxy use instances that would be difficult to find using existing techniques, and subsequently remove them while maintaining acceptable classification performance.
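A toy instantiation of the two-part test is sketched below, assuming the model exposes a named intermediate computation: association with the protected attribute is estimated via mutual information, and influence via intervention on the intermediate value. The data, threshold, and decision rule are illustrative, not the paper's analysis.

```python
# Toy proxy-use check: an intermediate computation is flagged when it is both
# associated with the protected attribute and influential on the decision.
import math
from collections import Counter

def mutual_information(xs, ys):
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum(c / n * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def is_proxy(intermediate, decide, rows, protected, assoc_threshold=0.1):
    inter_vals = [intermediate(r) for r in rows]
    associated = mutual_information(inter_vals, protected) > assoc_threshold
    # influence: does forcing the intermediate to another value flip any decision?
    influential = any(decide(r, intermediate(r)) != decide(r, v)
                      for r in rows for v in set(inter_vals))
    return associated and influential

# Illustrative data: zip code acts as a proxy for the protected attribute.
rows = [{"zip": z, "income": i} for z, i in
        [(1, 30), (1, 35), (2, 80), (2, 90), (1, 32), (2, 85)]]
protected = [0, 0, 1, 1, 0, 1]                     # strongly associated with zip
intermediate = lambda r: r["zip"]                  # the suspect computation
decide = lambda r, zip_value: int(zip_value == 2)  # decision driven by the intermediate
print(is_proxy(intermediate, decide, rows, protected))   # True
```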
In this paper, we propose principles of information control and sharing that support ORCON (ORiginator COntrolled access control) models while simultaneously improving the components of confidentiality, availability, and integrity needed to inherently support, when needed, responsibility-to-share policies, rapid information dissemination, data provenance, and data redaction. This new paradigm, providing unfettered and unimpeded access to information for authorized users while making access by unauthorized users impossible, contrasts with historical approaches to information sharing that have focused on the need to know rather than the need (or responsibility) to share.
Constraints such as separation-of-duty are widely used to specify requirements that supplement basic authorization policies. However, the existence of constraints (and authorization policies) may mean that a user is unable to fulfill her/his organizational duties because access to resources is denied. In short, there is a tension between the need to protect resources (using policies and constraints) and the availability of resources. Recent work on workflow satisfiability and resiliency in access control asks whether this tension compromises the ability of an organization to achieve its objectives. In this paper, we develop a new method of specifying constraints which subsumes much related work and allows a wider range of constraints to be specified. The use of such constraints leads naturally to a range of questions related to "policy existence", where a positive answer means that an organization's objectives can be realized. We provide an overview of our results establishing that some policy existence questions, notably for those instances that are restricted to user-independent constraints, are fixed-parameter tractable.
Two known shortcomings of standard techniques in formal verification are the limited capability to provide system-level assertions, and the scalability to large, complex models, such as those needed in Cyber-Physical Systems (CPS) applications. Leveraging data, which nowadays is becoming ever more accessible, has the potential to mitigate such limitations. However, this leads to a lack of formal proofs that are needed for modern safety-critical systems. This contribution presents a research initiative that addresses these shortcomings by bringing model-based techniques and data-driven methods together, which can help push the envelope of existing algorithms and tools in formal verification and thus expand their applicability to complex engineering systems, such as CPS. In the first part of the contribution, we discuss a new, formal, measurement-driven and model-based automated technique for the quantitative verification of physical systems with partly unknown dynamics. We formulate this setup as a data-driven Bayesian inference problem, formally embedded within a quantitative, model-based verification procedure. We argue that the approach can be applied to complex physical systems that are key for CPS applications, dealing with spatially continuous variables, evolving under complex dynamics, driven by external inputs, and accessed under noisy measurements. In the second part of the contribution, we concentrate on systems represented by models that evolve under probabilistic and heterogeneous (continuous/discrete, that is "hybrid", as well as nonlinear) dynamics. Such stochastic hybrid models (also known as SHS) are a natural mathematical framework for CPS. With a focus on model-based verification procedures, we provide algorithms for quantitative model checking of temporal specifications on SHS with formal guarantees. This is attained via the development of formal abstraction techniques that are based on quantitative approximations. The theory is complemented by algorithms, all packaged in software tools that are available to users, and applied here in the domain of Smart Energy.
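As a rough illustration of the abstraction idea only (the sketch carries none of the formal error guarantees that the techniques described above provide), a scalar stochastic system can be gridded into a finite Markov chain and a probabilistic invariance property checked by value iteration; the dynamics, grid, and horizon below are illustrative.

```python
# Toy abstraction of x+ = a*x + w, w ~ N(0, sigma^2), onto a uniform grid over
# a safe interval, followed by value iteration for the N-step "stay safe"
# probability on the resulting finite Markov chain.
import math

A, SIGMA = 0.9, 0.2
LOW, HIGH, CELLS, STEPS = -1.0, 1.0, 40, 10
width = (HIGH - LOW) / CELLS
centers = [LOW + (i + 0.5) * width for i in range(CELLS)]

def gauss_cdf(x, mu, sigma):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Transition matrix of the abstraction: from each cell center, the probability
# mass that a*x + w lands in each cell of the safe interval (the rest is "unsafe").
T = [[gauss_cdf(LOW + (j + 1) * width, A * c, SIGMA) -
      gauss_cdf(LOW + j * width, A * c, SIGMA)
      for j in range(CELLS)] for c in centers]

# Value iteration for the N-step invariance probability.
value = [1.0] * CELLS
for _ in range(STEPS):
    value = [sum(T[i][j] * value[j] for j in range(CELLS)) for i in range(CELLS)]

start_cell = min(range(CELLS), key=lambda i: abs(centers[i] - 0.0))
print(round(value[start_cell], 3))   # approx. P(stay in [-1, 1] for 10 steps | x0 = 0)
```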