Biblio
Induction is a successful approach for the verification of hardware and software systems. A common practice is to model a system using logical formulas and then use a decision procedure to verify that some logical formula is an inductive safety invariant for the system. A key ingredient in this approach is coming up with the inductive invariant, a task known as invariant inference. This is a major difficulty, and it is often left to humans or addressed by sound but incomplete abstract interpretation. This paper is motivated by the problem of inferring inductive invariants in shape analysis and in distributed protocols. This paper approaches the general problem of inferring first-order inductive invariants by restricting the language L of candidate invariants. Notice that the problem of invariant inference in a restricted language L differs from the safety problem, since a system may be safe and still have no inductive invariant in L that proves safety. Clearly, if L is finite (and if testing an inductive invariant is decidable), then inferring invariants in L is decidable. This paper presents some interesting cases where inferring inductive invariants in L is decidable even when L is an infinite language of universal formulas. Decidability is obtained by restricting L and defining a suitable well-quasi-order on the state space. We also present some undecidability results showing that our restrictions are necessary. We further present a framework for systematically constructing infinite languages while keeping the invariant inference problem decidable. We illustrate our approach by showing the decidability of inferring invariants for programs manipulating linked lists and for distributed protocols.
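To make the inference problem concrete, here is a minimal Python sketch of invariant search in a restricted candidate language (illustrative only, not the paper's algorithm; the toy system and the language L below are invented for exposition):

```python
# Invariant inference as search over a restricted language L of candidates,
# for a finite explicit-state transition system. All names are illustrative.

def is_inductive(candidate, init, step, safe):
    if not init <= candidate:                          # initiation
        return False
    if not all(step(s) <= candidate for s in candidate):
        return False                                   # consecution
    return candidate <= safe                           # implies safety

def infer_invariant(candidates, init, step, safe):
    """Return an inductive safety invariant from L, or None if L has none."""
    return next((inv for inv in candidates
                 if is_inductive(inv, init, step, safe)), None)

# Toy system: a counter over 0..5 that increments but saturates at 3.
init = {0}
step = lambda s: {min(s + 1, 3)}
safe = set(range(5))                                   # safety: value < 5
# L: the finite family of downward-closed sets {0..k}.
candidates = [set(range(k + 1)) for k in range(6)]
print(infer_invariant(candidates, init, step, safe))   # {0, 1, 2, 3}
```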
Parallel and distributed systems rely on intricate protocols to manage shared resources and synchronize, i.e., to manage how many processes are in a particular state. Effective verification of such systems requires universal quantification to reason about parameterized state, together with cardinality reasoning that tracks sets of processes, messages, and failures, in order to adequately capture protocol logic. In this paper we present Tool, an automatic invariant synthesis method that integrates cardinality-based reasoning and universal quantification. The resulting increase in expressiveness allows Tool to verify, for the first time, a representative collection of intricate parameterized protocols.
The HTTPS ecosystem, including the SSL/TLS protocol, the X.509 public-key infrastructure, and their cryptographic libraries, is the standardized foundation of Internet security. Despite 20 years of progress and extensions, however, its practical security remains controversial, as witnessed by recent efforts to improve its design and implementations, as well as recent disclosures of attacks against its deployments. The Everest project is a collaboration between Microsoft Research, INRIA, and the community at large that aims at modelling, programming, and verifying the main HTTPS components with strong machine-checked security guarantees, down to core system and cryptographic assumptions. Although HTTPS involves a relatively small amount of code, it requires efficient low-level programming and intricate proofs of functional correctness and security. To this end, we are also improving our verification tools (F*, Dafny, Lean, Z3) and developing new ones. In my talk, I will present our project, review our experience with miTLS, a verified reference implementation of TLS coded in F*, and describe current work towards verified, secure, efficient HTTPS.
We develop a novel model-based hardware-in-the-loop (HIL) framework for optimising the energy consumption of embedded software controllers. Controller and plant models are specified as networks of parameterised timed input/output automata and translated into executable code. The controller code runs on the target embedded hardware, which is connected to a power monitor and interacts with a simulation of the plant model. The framework then generates a power consumption model that maps controller transitions to distributions over power measurements, and this model is used to optimise the timing parameters of the controller without compromising a given safety requirement. The novelty of our approach is that we measure the real power consumption of the controller and use the data thus obtained for energy optimisation. We employ timed Petri nets as an intermediate representation of the executable specification, which facilitates efficient code generation and fast simulations. Our framework uniquely combines the advantages of rigorous specifications with accurate power measurements and methods for online model estimation, thus enabling the automated design of correct and energy-efficient controllers.
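As a rough illustration of the optimisation step, the sketch below picks controller timing from measured power data; it assumes a simple periodic sense-compute-actuate controller, and the numbers and deadline-style safety requirement are invented, not taken from the paper:

```python
# Illustrative sketch: choose a controller period from measured power data.
import statistics

power_mw = {                        # power samples per controller transition
    "sense":   [4.1, 4.3, 4.0],
    "compute": [9.5, 9.9, 9.7],
    "actuate": [6.2, 6.0, 6.4],
}
dur_ms = {"sense": 2.0, "compute": 5.0, "actuate": 3.0}

def cycle_energy_uj():
    # mean power (mW) * duration (ms) = energy (uJ) of one control cycle
    return sum(statistics.mean(power_mw[t]) * dur_ms[t] for t in power_mw)

def avg_power_mw(period_ms):
    return cycle_energy_uj() / period_ms      # uJ / ms = mW

deadline_ms = 50.0                            # safety: react within 50 ms
periods = [10.0, 20.0, 40.0, 80.0]            # candidate timing parameters
best = max(p for p in periods if p <= deadline_ms)   # longest safe period
print(best, round(avg_power_mw(best), 2))     # 40.0 ms, ~1.88 mW average
```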
Dynamic taint analysis can be used as a defense against low-integrity data in applications with untrusted user interfaces. An important example is defense against XSS and injection attacks in programs with web interfaces. Data sanitization is commonly used in this context, and can be treated as a precondition for endorsement in a dynamic integrity taint analysis. However, sanitization is often incomplete in practice. We develop a model of dynamic integrity taint analysis for Java that addresses imperfect sanitization with a defense-in-depth approach. To avoid false positives, the results of sanitization are endorsed for access control (aka prospective security) but are tracked and logged for auditing and accountability (aka retrospective security). We show how this heterogeneous prospective/retrospective mechanism can be specified as a uniform policy, separate from code. We then use this policy to establish correctness conditions for a program rewriting algorithm that instruments code for the analysis. The rewriting itself is a model of existing, efficient Java taint analysis tools.
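The prospective/retrospective split can be illustrated with a small Python sketch (the paper targets Java and specifies the policy separately from code; this fragment, including the Tainted wrapper and the sanitizer choice, is purely illustrative):

```python
# Illustrative sketch: endorse sanitized data for access control, but log
# the endorsement for auditing, since the sanitizer may be incomplete.
import html, logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("endorsements")

class Tainted(str):
    """String carrying a low-integrity taint from an untrusted interface."""

def sanitize(value: Tainted) -> str:
    clean = html.escape(value)       # possibly incomplete sanitizer
    # Prospective: the result is endorsed and may reach the sink.
    # Retrospective: the endorsement is logged for accountability.
    audit.info("endorsed %r -> %r via html.escape", str(value), clean)
    return clean

def render(fragment: str):
    assert not isinstance(fragment, Tainted), "untrusted data reached sink"
    print(fragment)

render(sanitize(Tainted("<script>alert(1)</script>")))
```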
Cloud computing platforms are becoming increasingly prevalent and readily available, offering resource-constrained clients an economical alternative for performing large-scale computation. In this work, we address the problem of securely outsourcing large-scale nonnegative matrix factorization (NMF) to a cloud in such a way that the client can verify the correctness of the results with small overhead. The input matrix is protected by a lightweight, permutation-based encryption mechanism. By exploiting the iterative nature of NMF computation, we propose a single-round verification strategy, which we prove to be effective. Both theoretical and experimental results are given to demonstrate the superior performance of our scheme.
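A minimal sketch of the permutation-based protection idea, assuming row/column permutations as the encryption and a simple residual check in place of the paper's single-round verification strategy (all details below are illustrative, not the paper's actual scheme):

```python
# Illustrative sketch: permute, outsource NMF, un-permute, spot-check.
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((8, 6))                         # client's nonnegative matrix

p, q = rng.permutation(8), rng.permutation(6)  # secret "encryption" keys
V_enc = V[p][:, q]                             # permuted matrix sent out

def cloud_nmf(X, r=3, iters=500):
    """Stand-in for the cloud: plain multiplicative-update NMF."""
    W = rng.random((X.shape[0], r)) + 1e-3
    H = rng.random((r, X.shape[1])) + 1e-3
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

W_enc, H_enc = cloud_nmf(V_enc)

W = np.empty_like(W_enc); W[p] = W_enc         # undo the row permutation
H = np.empty_like(H_enc); H[:, q] = H_enc      # undo the column permutation

# Lightweight check: the de-permuted factors should reconstruct V.
print("max reconstruction error:", np.abs(W @ H - V).max())
```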
Modern mobile systems such as smartphones, tablets, and wearables contain a plethora of sensors such as camera, microphone, GPS, and accelerometer. Moreover, being mobile, these systems are with the user all the time, e.g., in the user's purse or pocket. Therefore, mobile sensors can capture extremely sensitive and private information about the user including daily conversations, photos, videos, and visited locations. Such a powerful sensing capability raises important privacy concerns. To address these concerns, we believe that mobile systems must be equipped with trustworthy sensor notifications, which use indicators such as LEDs to inform the user unconditionally whenever the sensors are on. We present Viola, our design and implementation of trustworthy sensor notifications, in which we leverage two novel solutions. First, we deploy a runtime monitor in low-level system software, e.g., in the operating system kernel or in the hypervisor. The monitor intercepts writes to the registers of sensors and indicators, evaluates them against checks on sensor notification invariants, and rejects those that fail the checks. Second, we use formal verification methods to prove the functional correctness of the compilation of our invariant checks from a high-level language. We demonstrate the effectiveness of Viola on different mobile systems, such as Nexus 5, Galaxy Nexus, and ODROID XU4, and for various sensors and indicators, such as camera, microphone, LED, and vibrator. We demonstrate that Viola incurs almost no overhead on sensor performance and only a small power consumption overhead.
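A toy Python sketch of the monitor's check (Viola's actual monitor runs in the kernel or hypervisor and its checks are compiled from a high-level language with verified compilation; the register names and the invariant below are invented):

```python
# Illustrative sketch: reject register writes that would violate a
# notification invariant ("camera on implies LED on").
regs = {"CAMERA_EN": 0, "LED_EN": 0}

def invariant_holds(state):
    return not state["CAMERA_EN"] or state["LED_EN"]

def monitored_write(reg, value):
    proposed = dict(regs, **{reg: value})
    if not invariant_holds(proposed):
        return False                 # monitor rejects the write
    regs[reg] = value
    return True

assert not monitored_write("CAMERA_EN", 1)  # camera on, LED off: rejected
assert monitored_write("LED_EN", 1)         # turn the LED on first
assert monitored_write("CAMERA_EN", 1)      # now the camera may turn on
```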
The premise of the SafeConfig'16 Workshop is that existing tools and methods for security assessments are necessary but insufficient for scientifically rigorous testing and evaluation of resilient and active cyber systems. The objective of this workshop is the exploration and discussion of scientifically sound testing regimens that will continuously and dynamically probe, attack, and "test" the various resilient and active technologies. This adaptation and change in focus necessitates, at the very least, modification of existing approaches and, potentially, wholesale new developments, to ensure that resilient- and agile-aware security testing is available to the research community. All testing, validation, and experimentation must also be repeatable, reproducible, subject to scientific scrutiny, measurable, and meaningful to both researchers and practitioners. The workshop will convene a panel of experts to explore this concept from three different perspectives. The first perspective is that of the practitioner: we will explore whether active and resilient technologies are deployed or planned for deployment, and whether the verification methodology affects that decision. The second perspective is that of the research community: we will address the shortcomings of current approaches and the research directions needed to address practitioners' concerns. The third perspective is that of the policy community: specifically, we will explore the dynamics between technology, verification, and policy.
Enterprise networks today have highly diverse correctness requirements and relatively common performance objectives. As a result, preferred abstractions for enterprise networks are those which allow matching security and correctness specifications, while transparently managing performance. Existing SDN network management architectures, however, bundle correctness and performance as a single abstraction. We argue that this creates an SDN ecosystem that is unnecessarily hard to build, maintain, and evolve. We advocate a separation of the diverse correctness abstractions from generic performance optimization, to enable easier evolution of SDN controllers and platforms. We propose Oreo, a first step towards a common and relatively transparent performance optimization layer for SDN. Oreo performs the optimization by first building a model that describes every flow in the network, and then performing network-wide, multi-objective optimization based on this model, without disrupting higher-level security and correctness.
Authors: Santhosh Prabhu, Mo Dong, Tong Meng, P. Brighten Godfrey, and Matthew Caesar
Security attacks often exploit flaws that are not anticipated in an abstract design, but are introduced inadvertently when high-level interactions in the design are mapped to low-level behaviors in the supporting platform. This paper proposes a multi-representational approach to security analysis, where models capturing distinct (but possibly overlapping) views of a system are automatically composed in order to enable an end-to-end analysis. This approach allows the designer to incrementally explore the impact of design decisions on security, and to discover attacks that span multiple layers of the system. This paper describes Poirot, a prototype implementation of the approach, and reports on our experience applying Poirot to detect previously unknown security flaws in publicly deployed systems.
Privacy analysis is essential in modern society. Preserving data privacy in access control and guaranteeing service in wireless sensor networks are important instances of it. In program verification, we consider not only such safety and liveness properties but also security policies such as noninterference and observational determinism, which have been proposed as hyperproperties. Fairness is widely applied in verification for concurrent systems, wireless sensor networks, and embedded systems. This paper studies verification and analysis techniques for proving security-relevant properties and hyperproperties by proposing deductive proof rules under fairness requirements (constraints).
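For reference, observational determinism is usually formalized as a hyperproperty, i.e., a property of pairs of traces rather than of single traces (a standard formulation, not necessarily the exact one used in this paper):

```latex
% Observational determinism as a hyperproperty: it relates *pairs* of
% traces, so it cannot be stated as an ordinary trace property.
\forall \pi_1, \pi_2 \in \mathrm{Traces}(S)\colon\quad
  \pi_1 =_{\text{low-in}} \pi_2 \;\Longrightarrow\; \pi_1 =_{\text{low-obs}} \pi_2
```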
Presented at NSA SoS Quarterly Meeting, July 2016 and November 2016
Modeling and evaluating the performance of large-scale wireless sensor networks (WSNs) is a challenging problem. The traditional method for representing the global state of a system as a cross product of the states of individual nodes in the system results in a state space whose size is exponential in the number of nodes. We propose an alternative way of representing the global state of a system: namely, as a probability mass function (pmf) which represents the fraction of nodes in different states. A pmf corresponds to a point in a Euclidean space of possible pmf values, and the evolution of the state of a system is represented by trajectories in this Euclidean space. We propose a novel performance evaluation method that examines all pmf trajectories in a dense Euclidean space by exploring only finite relevant portions of the space. We call our method Euclidean model checking. Euclidean model checking is useful both in the design phase—where it can help determine system parameters based on a specification—and in the evaluation phase—where it can help verify performance properties of a system. We illustrate the utility of Euclidean model checking by using it to design a time difference of arrival (TDoA) distance measurement protocol and to evaluate the protocol’s implementation on a 90-node WSN. To facilitate such performance evaluations, we provide a Markov model estimation method based on applying a standard statistical estimation technique to samples resulting from the execution of a system.
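A minimal sketch of the pmf-as-state idea (the three-state chain and its numbers are invented for illustration and are unrelated to the TDoA protocol's actual model):

```python
# Illustrative sketch: the global state of an N-node WSN as a pmf over
# per-node states, evolving as a trajectory in Euclidean space under a
# Markov matrix P; we check a performance property along the trajectory.
import numpy as np

P = np.array([[0.90, 0.10, 0.00],   # idle    -> ranging
              [0.00, 0.80, 0.20],   # ranging -> synced
              [0.02, 0.00, 0.98]])  # synced may occasionally drop to idle

pmf = np.array([1.0, 0.0, 0.0])     # all nodes start idle
for t in range(200):
    pmf = pmf @ P                   # one step along the Euclidean trajectory
    if pmf[2] >= 0.6:               # property: >= 60% of nodes synced
        print(f"property reached at step {t}")
        break
else:
    print("property not reached within the time bound")
```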
Verification algorithms for networks of nonlinear hybrid automata (HA) can help us understand and control biological processes such as cardiac arrhythmia, formation of memory, and genetic regulation. We present an algorithm for over-approximating reach sets of networks of nonlinear HA which can be used for sound and relatively complete invariant checking. First, it uses automatically computed input-to-state discrepancy functions for the individual automata modules in the network A to construct a low-dimensional model M. Simulations of both A and M are then used to compute the reach tubes for A. These techniques enable us to handle a challenging verification problem involving a network of cardiac cells, where each cell has four continuous variables and 29 locations. Our prototype tool can check bounded-time invariants for networks with 5 cells (20 continuous variables, 29^5 locations) typically in less than 15 minutes for reasonable time horizons. From the computed reach tubes we can infer biologically relevant properties of the network for a given set of initial states.
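The simulate-and-bloat idea behind such reach-tube computations can be sketched on a scalar toy system; the fixed exponential bound below stands in for the paper's automatically computed input-to-state discrepancy functions:

```python
# Illustrative sketch: simulate a center trajectory and bloat it by a
# discrepancy bound beta(delta, t) = delta * exp(L_exp * t) (toy system).
import math

L_exp = -0.5                       # exponent of the discrepancy function
def f(x): return -0.5 * x          # toy dynamics: x' = -0.5 x

def reach_tube(x0, delta, T=10.0, dt=0.01):
    x, t, tube = x0, 0.0, []
    while t <= T:
        beta = delta * math.exp(L_exp * t)     # trajectory discrepancy bound
        tube.append((t, x - beta, x + beta))   # over-approximating interval
        x += dt * f(x)                         # Euler step of the center trace
        t += dt
    return tube

# Bounded-time invariant: every state starting in [0.9, 1.1] stays below 1.2.
assert all(hi < 1.2 for _, _, hi in reach_tube(1.0, 0.1))
```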
Design and testing of pacemakers is challenging because of the need to capture the interaction between the physical processes (e.g., the voltage signal in cardiac tissue) and the embedded software (e.g., a pacemaker). At the same time, there is a growing need for design and certification methodologies that can provide quality assurance for the embedded software. We describe recent progress in simulation-based techniques that are capable of ensuring guaranteed coverage. Our methods employ discrepancy functions, which impose bounds on system dynamics, and proceed by iteratively constructing over-approximations of the reachable set of states. We are able to prove time-bounded safety or produce counterexamples. We illustrate the techniques by analyzing a family of pacemaker designs against time-duration requirements and synthesizing safe parameter ranges. We conclude by outlining the potential uses of this technology to improve the safety of medical device designs.
Design-time analysis and verification of distributed real-time embedded systems necessitates modeling the time-varying performance of the network and comparing it to the application requirements. Earlier work has shown how to build a system network model that abstracts away the network's physical medium and the protocols which govern its access and multiplexing. In this work we show how to apply a network medium channel access protocol, such as Time-Division Multiple Access (TDMA), to our network analysis methods, and we use the results to show that the abstracted model without an explicit model of the protocol is valid.
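As a small illustration of folding a channel access protocol into such an analysis, the sketch below bounds per-hop latency under a TDMA schedule (slot counts and timings are invented, not from the paper):

```python
# Illustrative sketch: worst-case channel-access delay under TDMA.
slot_ms, slots_per_frame, my_slots = 2.0, 10, {3}

def worst_case_access_delay_ms():
    # A message arriving just after our slot waits almost a full frame.
    gap = max((s - t) % slots_per_frame        # distance to next owned slot
              for t in range(slots_per_frame)
              for s in my_slots)
    return gap * slot_ms

tx_ms = slot_ms                                # transmission fits in one slot
print(worst_case_access_delay_ms() + tx_ms)    # per-hop latency bound: 20.0
```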
Identity verification plays an important role in creating trust in the economic system. It can, and should, be done in a way that doesn't decrease individual privacy.
A key question that arises in the rigorous analysis of cyberphysical systems under attack is whether the attacked system deviates significantly from the ideal allowed behavior. This is the problem of deciding whether or not the ideal system is an abstraction of the attacked system. A quantitative variation of this question can capture how much the attacked system deviates from the ideal. Thus, algorithms for deciding abstraction relations can help measure the effect of attacks on cyberphysical systems and develop attack detection strategies. In this paper, we present a decision procedure for proving that one nonlinear dynamical system is a quantitative abstraction of another. Directly computing the reach sets of these nonlinear systems is undecidable in general, and reach set over-approximations do not give a direct way of proving abstraction. Our procedure uses (possibly inaccurate) numerical simulations and a model annotation to compute tight approximations of the observable behaviors of the system, and then uses these approximations to decide on abstraction. We show that the procedure is sound and that it is guaranteed to terminate under reasonable robustness assumptions.
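The quantitative abstraction question can be illustrated on a pair of toy observable trajectories (the systems and epsilon values below are invented; the paper's procedure additionally bloats simulations using model annotations so the check is sound for full reach sets):

```python
# Illustrative sketch: is every observable behavior of the attacked system
# within epsilon of an ideal behavior over a bounded horizon?
import math

def ideal(t):    return math.exp(-t)              # ideal observable output
def attacked(t): return math.exp(-t) + 0.05 * math.sin(t)

def is_eps_abstraction(eps, T=10.0, dt=0.01):
    t = 0.0
    while t <= T:
        if abs(attacked(t) - ideal(t)) > eps:     # deviation exceeds budget
            return False, t
        t += dt
    return True, None

print(is_eps_abstraction(0.1))    # (True, None): deviation stays within 0.1
print(is_eps_abstraction(0.01))   # fails early, once the drift exceeds 0.01
```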