Our position is that a key component of securing cyber-physical systems (CPS) is to develop a theory of accountability that encompasses both control and computing systems. We envision that a unified theory of accountability in CPS can be built on a foundation of causal information flow analysis. This theory will support the design and analysis of mechanisms at various stages of the accountability regime: attack detection, responsibility assignment (e.g., attack identification or localization), and corrective measures (e.g., via resilient control). As an initial step in this direction, we summarize our results on attack detection in control systems. We use the Kullback-Leibler (KL) divergence as a causal information flow measure. We then recover, using information flow analyses, a set of existing results in the literature that were previously proved using different techniques. These results cover passive detection, stealthy attack characterization, and active detection. This research direction is related to recent work on accountability in computational systems [1], [2], [3], [4]. We envision that by casting accountability theories in computing and control systems in terms of causal information flow, we can provide a common foundation to develop a theory for CPS that compose elements from both domains.
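To make the detection criterion concrete, here is a minimal Python sketch (an illustration, not code from the summarized results) of passive attack detection via KL divergence: a Gaussian is fit to a window of control-loop residuals, and an attack is flagged when its divergence from the nominal residual model exceeds a threshold. The function names, nominal model, and threshold value are all illustrative assumptions.

```python
# Illustrative sketch: passive attack detection by thresholding the KL
# divergence between the observed residual distribution and the nominal one.
# All names and parameter values here are hypothetical.
import numpy as np

def kl_gaussian(mu0, var0, mu1, var1):
    """KL( N(mu0, var0) || N(mu1, var1) ) for univariate Gaussians."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

def detect_attack(residuals, nominal_mu=0.0, nominal_var=1.0, threshold=0.5):
    """Fit a Gaussian to the observed residual window and flag an attack
    when its KL divergence from the nominal model exceeds the threshold."""
    mu = residuals.mean()
    var = residuals.var() + 1e-12  # guard against a zero-variance window
    return kl_gaussian(mu, var, nominal_mu, nominal_var) > threshold

# Example: residuals biased by a sensor-spoofing attack trip the detector.
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=500)
attacked = rng.normal(1.5, 1.0, size=500)  # biased residuals under attack
print(detect_attack(clean))     # False
print(detect_attack(attacked))  # True
```

A stealthy attacker, in this framing, is one that keeps the induced divergence below the detection threshold.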
Interface-confinement is a common mechanism for securing untrusted code by executing it inside a sandbox. The sandbox limits (confines) the code's interaction with key system resources to a restricted set of interfaces. This practice is seen in web browsers, hypervisors, and other security-critical systems. Motivated by these systems, we present a program logic, called System M, for modeling and proving safety properties of systems that execute adversary-supplied code via interface-confinement. In addition to using computation types to specify the effects of computations, System M includes a novel invariant type to specify the properties of interface-confined code. The interpretation of the invariant type includes terms whose effects satisfy an invariant. We construct a step-indexed model built over traces and prove the soundness of System M relative to this model. System M is the first program logic that allows proofs of safety for programs that execute adversary-supplied code without requiring the adversarial code to be available for deep static analysis. System M can be used to model and verify protocols as well as system designs. We demonstrate the reasoning principles of System M by verifying the state integrity property of the design of Memoir, a previously proposed trusted computing system.
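The following Python sketch illustrates the interface-confinement pattern itself (not System M's formalism): untrusted code receives only a restricted interface whose methods each preserve a safety invariant, so every trace of the composed system satisfies the invariant regardless of what the adversarial code does. All class and function names are hypothetical.

```python
# Illustrative sketch of interface-confinement. The untrusted code never
# holds the resource directly; it only gets a confined interface whose
# methods maintain the invariant (here: the log is append-only).
class Log:
    """A key system resource: an append-only log."""
    def __init__(self):
        self._entries = []

class ConfinedLogInterface:
    """The only handle handed to untrusted code. Appending preserves the
    invariant (existing entries are never mutated or deleted), and no
    other operation on the underlying Log is exposed."""
    def __init__(self, log):
        self._log = log
    def append(self, entry):
        self._log._entries.append(str(entry))

def run_untrusted(untrusted_fn, log):
    # The adversary-supplied function receives the confined interface only.
    untrusted_fn(ConfinedLogInterface(log))

log = Log()
run_untrusted(lambda api: api.append("adversarial entry"), log)
print(log._entries)  # ['adversarial entry'] -- the invariant still holds
```

This is the proof principle the invariant type captures: safety follows from the interface's invariant alone, without statically analyzing the adversary's code.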
With the rapid increase in cloud services collecting and using user data to offer personalized experiences, ensuring that these services comply with their privacy policies has become a business imperative for building user trust. However, most compliance efforts in industry today rely on manual review processes and audits designed to safeguard user data, and are therefore resource-intensive and lack coverage. In this paper, we present our experience building and operating a system to automate privacy policy compliance checking in Bing. Central to the design of the system are (a) Legalease, a language that allows specification of privacy policies that impose restrictions on how user data is handled, and (b) Grok, a data inventory for Map-Reduce-like big data systems that tracks how user data flows among programs. Grok maps code-level schema elements to data types in Legalease, in essence annotating existing programs with information flow types with minimal human input. Compliance checking is thus reduced to information flow analysis of big data systems. The system, bootstrapped by a small team, checks compliance daily over millions of lines of ever-changing source code written by several thousand developers.
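As a rough illustration of that reduction (not the actual Legalease syntax or the Grok implementation), the Python sketch below encodes a policy as DENY clauses over data types and purposes and checks each labeled data flow against them; all identifiers are hypothetical.

```python
# Illustrative sketch: a Legalease-style policy as DENY clauses, with
# compliance checking reduced to testing labeled data flows against them.
# Data types and purposes here are hypothetical examples.
from dataclasses import dataclass

@dataclass(frozen=True)
class DenyClause:
    datatype: str     # e.g. "IPAddress"
    purpose: str      # e.g. "AdvertisingUse"

@dataclass(frozen=True)
class Flow:
    column: str       # code-level schema element, e.g. "clientIp"
    datatype: str     # policy-level data type inferred by the inventory
    purpose: str      # purpose of the consuming job

policy = {DenyClause("IPAddress", "AdvertisingUse")}

def violations(flows, policy):
    """Return every flow that matches some DENY clause in the policy."""
    return [f for f in flows
            if any(f.datatype == c.datatype and f.purpose == c.purpose
                   for c in policy)]

flows = [
    Flow("clientIp", "IPAddress", "AdvertisingUse"),  # violates the policy
    Flow("clientIp", "IPAddress", "AbuseDetection"),  # allowed
]
for v in violations(flows, policy):
    print(f"DENY: {v.column} ({v.datatype}) used for {v.purpose}")
```

The key design point mirrored here is the split of labor: the data inventory supplies the column-to-datatype labeling, so the policy check itself stays a simple, repeatable pass over labeled flows.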