Security Reasoning for Distributed Systems with Uncertainty (CMU/Cornell Collaborative Proposal) - October 2015

Public Audience
Purpose: To highlight progress. Information is generally at a higher level that is accessible to the interested public.

PI(s): André Platzer
Co-PI(s):
Researchers: Dexter Kozen (Cornell)

1) HARD PROBLEM(S) ADDRESSED (with short descriptions)
This refers to the Hard Problems released in November 2012.

How can we reason about security in underspecified and uncertain environments? This project focuses on scalable optimization and proof techniques for both analyzing and synthesizing security policies. Methods focus on approximate reasoning that maintains rigorous bounds on answer quality.

2) PUBLICATIONS

N/A

3) KEY HIGHLIGHTS

This project has made two contributions that will help with reasoning about security goals in uncertain systems. First, we described a new problem class called #∃SAT (Zawadzki, Platzer, & Gordon, A Generalization of SAT and #SAT for Robust Policy Evaluation, 2013), which is a quantified generalization of SAT, one of the most important problems in computer science. Our extension includes both counting (#) and search (∃) quantifiers. We demonstrated that a number of questions about the robustness and reliability of policies can be set up as #∃SAT instances. We also gave a rigorous theoretical characterization of the problem's worst-case complexity. Furthermore, we designed an algorithm that, despite the problem class's formidable worst-case complexity, performed extremely well empirically. This empirical success came from exploiting a type of structure that appears to be common in practice.
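To give a flavor of such queries, the sketch below brute-forces a toy question combining the search (∃) and counting (#) quantifiers: does some assignment to the search variables admit at least a threshold number of satisfying assignments of the counting variables? The formula, variable names, and threshold here are hypothetical illustrations; the algorithm from the cited paper is far more sophisticated than this enumeration.

```python
from itertools import product

def phi(x1, x2, y1, y2):
    """Hypothetical toy formula: (x1 OR y1) AND (x2 OR NOT y2)."""
    return (x1 or y1) and (x2 or not y2)

def exists_count(threshold):
    """Brute-force a query with search variables (x1, x2) and counting
    variables (y1, y2): is there SOME (x1, x2) under which at least
    `threshold` assignments of (y1, y2) satisfy phi?"""
    for x1, x2 in product([False, True], repeat=2):
        count = sum(phi(x1, x2, y1, y2)
                    for y1, y2 in product([False, True], repeat=2))
        if count >= threshold:
            return True
    return False
```

For instance, setting both search variables to True satisfies the toy formula under all four counting assignments, so a threshold of 4 is achievable while a threshold of 5 is not.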

Our second contribution is ongoing work on a set of approximate optimization techniques. These techniques involve solving an optimization problem within a restricted basis of functions. This approach, called Galerkin approximation, allows problems to be solved rapidly but with some loss in solution quality. We are currently focused on applying these methods to anomaly detection problems and are actively expanding our work into policy synthesis questions. Many anomaly detection and policy synthesis questions can be cast as instances of a general problem called the linear complementarity problem (LCP) (Zawadzki, Gordon, & Platzer, A Projection Algorithm for Strictly Monotone Linear Complementarity Problems, 2013). We are experimenting with anomaly detection LCP instances based on one-class kernel support vector machines (SVMs), and we are currently working on LCP instances based on Markov decision processes (MDPs). MDPs are a popular framework for making sequential decisions in an uncertain environment, and synthesizing good security policies is an example of this kind of sequential decision problem. For example, we may want to automatically generate a good policy for when access control restrictions should be escalated in response to anomalous system behavior, while still allowing legitimate users to work productively.
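As a concrete illustration of the LCP formulation, the sketch below solves a small strictly monotone instance (symmetric positive definite M) with projected Gauss-Seidel, a standard textbook iteration. This is an assumed illustrative method, not the projection algorithm from the cited paper, and the example matrix and vector are made up.

```python
def solve_lcp_pgs(M, q, iters=200):
    """Projected Gauss-Seidel sketch for the LCP: find z >= 0 such that
    w = M z + q >= 0 and z . w = 0 (complementarity).  This simple
    iteration converges when M is symmetric positive definite, i.e. the
    strictly monotone case discussed in the text."""
    n = len(q)
    z = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # Row-i residual, excluding the diagonal term.
            r = q[i] + sum(M[i][j] * z[j] for j in range(n) if j != i)
            # Unconstrained Gauss-Seidel step, projected onto z_i >= 0.
            z[i] = max(0.0, -r / M[i][i])
    return z

# Illustrative instance: M is symmetric positive definite, and the
# solution z = (1, 2) makes M z + q exactly zero (interior solution).
M = [[2.0, 1.0], [1.0, 2.0]]
q = [-4.0, -5.0]
z = solve_lcp_pgs(M, q)
```

In an anomaly detection application, M and q would instead be assembled from the kernel matrix and parameters of a one-class SVM, but the same nonnegativity and complementarity conditions define the solution.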