Security Reasoning for Distributed Systems with Uncertainty (CMU/Cornell Collaborative Proposal) - July 2017

Public Audience
Purpose: To highlight progress. Information is presented at a high level accessible to the interested public.

PI(s): André Platzer
Co-PI(s):
Researchers: Dexter Kozen (Cornell)

1) HARD PROBLEM(S) ADDRESSED (with short descriptions)
This refers to Hard Problems, released November 2012.

Scalability and Composability; Resilient Architectures

How can we reason about security in large and uncertain environments? Our research focuses on methods for approximating the optimization problems used in policy synthesis for systems with large state spaces and uncertain dynamics.

2) PUBLICATIONS

None this quarter

3) KEY HIGHLIGHTS

This project is currently focused on approximate optimization techniques with specific application to security planning and policy setting. These techniques solve an optimization problem within a restricted basis of functions, an approach called Galerkin approximation, which allows problems to be solved rapidly but with some loss in solution quality. We are currently applying these methods to policy synthesis questions, but they can also be applied to anomaly detection problems. Many anomaly detection and policy synthesis questions can be cast as instances of a general problem called the linear complementarity problem (LCP) (Zawadzki, Gordon, and Platzer, A Projection Algorithm for Strictly Monotone Linear Complementarity Problems, 2013).
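As an illustrative sketch only (a textbook projected iteration, not the projection algorithm from the cited paper), an LCP asks for z >= 0 with w = Mz + q >= 0 and z·w = 0; for strictly monotone M and a small step size, repeatedly stepping along -(Mz + q) and projecting onto the nonnegative orthant converges to the solution:

```python
import numpy as np

def solve_lcp_projection(M, q, alpha=0.1, tol=1e-10, max_iter=100000):
    """Find z >= 0 with w = M z + q >= 0 and z.w = 0.

    Projected iteration: step against the residual M z + q, then clip
    to the nonnegative orthant. Converges for strictly monotone
    (positive definite) M when alpha is small enough.
    """
    z = np.zeros_like(q, dtype=float)
    for _ in range(max_iter):
        z_next = np.maximum(0.0, z - alpha * (M @ z + q))
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

# Small strictly monotone example (illustrative numbers).
M = np.array([[2.0, 1.0], [0.0, 2.0]])
q = np.array([-1.0, 1.0])
z = solve_lcp_projection(M, q)
w = M @ z + q
# Solution satisfies z >= 0, w >= 0, and complementarity z.w = 0
# (here z = [0.5, 0], w = [0, 1]).
```

Galerkin approximation enters when z lives in a huge space: one restricts z to the span of a small basis and solves the projected problem there, trading solution quality for tractability.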

We are currently working on LCP instances based on Markov decision processes (MDPs). MDPs are a popular framework for making sequential decisions in uncertain environments, and synthesizing good security policies is an example of this kind of task. For example, we may want to automatically generate a good policy for when access control restrictions should be escalated in response to anomalous system behavior, while still allowing legitimate users to work productively. Our MDP approximation techniques will improve policy synthesis capability for many of these large security problems.
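As a toy illustration of MDP-based policy synthesis (the states, actions, and reward numbers below are hypothetical, not drawn from the project), value iteration on a two-state access-control model learns to allow access under normal behavior and escalate restrictions under anomalous behavior:

```python
import numpy as np

# Hypothetical MDP: states 0 = normal, 1 = anomalous;
# actions 0 = allow, 1 = restrict.
# P[a][s, s'] = transition probability, R[s, a] = expected reward.
P = np.array([
    [[0.90, 0.10],   # allow: normal mostly stays normal
     [0.20, 0.80]],  # allow: anomalous behavior tends to persist
    [[0.95, 0.05],   # restrict: slightly safer when normal
     [0.70, 0.30]],  # restrict: usually clears the anomaly
])
R = np.array([
    [1.0, 0.2],      # normal: allowing keeps users productive
    [-2.0, -0.5],    # anomalous: allowing is risky
])
gamma = 0.95  # discount factor

def value_iteration(P, R, gamma, tol=1e-9):
    n_states = R.shape[0]
    V = np.zeros(n_states)
    while True:
        # Q[s, a] = R[s, a] + gamma * sum_s' P[a][s, s'] * V[s']
        Q = R + gamma * np.einsum("ast,t->sa", P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

V, policy = value_iteration(P, R, gamma)
# policy[s] is the optimal action in state s:
# allow (0) when normal, restrict (1) when anomalous.
```

Exact value iteration is infeasible at the state-space sizes security problems exhibit; the Galerkin/LCP machinery above approximates this computation within a restricted basis.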

Our current work ties into our earlier work on #E-SAT solving (Zawadzki, Platzer, & Gordon, A Generalization of SAT and #SAT for Robust Policy Evaluation, 2013), which offered new capabilities for analyzing the robustness of security policies under uncertainty. Together, these methods represent two important links in a chain of tools for generating and verifying policies in a number of different security applications that exhibit uncertainty. For example, one can first synthesize a policy using our MDP techniques, and then analyze the policy's robustness to random failure using our #E-SAT methods.
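As a simplified stand-in for the robustness-analysis step (a brute-force enumeration in the spirit of #SAT-style counting, not the #E-SAT algorithm itself), one can enumerate failure scenarios and count the fraction under which a given policy remains safe:

```python
from itertools import product

def robustness_fraction(policy_safe, n_components):
    """Enumerate every failure scenario (each component up or down)
    and return the fraction under which the policy stays safe.

    policy_safe: predicate mapping a tuple of component-failure flags
    to True if the policy still meets its safety goal.
    """
    scenarios = list(product([False, True], repeat=n_components))
    safe = sum(1 for s in scenarios if policy_safe(s))
    return safe / len(scenarios)

# Hypothetical safety predicate: the policy tolerates at most one
# failed component out of three.
ok = robustness_fraction(lambda s: sum(s) <= 1, n_components=3)
# → 0.5 (4 of the 8 scenarios have at most one failure)
```

Real instances have far too many scenarios to enumerate, which is exactly what the #E-SAT solving machinery is designed to handle.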

4) COMMUNITY ENGAGEMENTS

N/A

5) EDUCATIONAL ADVANCES

N/A