Model-Based Explanation For Human-in-the-Loop Security - January 2019

PI(s), Co-PI(s), Researchers: David Garlan, Bradley Schmerl (CMU)

HARD PROBLEM(S) ADDRESSED
Human Behavior
Metrics
Resilient Architectures

PUBLICATIONS

None.

PUBLIC ACCOMPLISHMENT HIGHLIGHTS

Planning for automated management of software systems often involves optimizing multiple objectives, among which various aspects of security are a primary concern. End-users and security administrators need to understand the expected consequences of a planning solution, including the tradeoffs made to reconcile any competing objectives. In the context of Markov decision process (MDP) planning, manually inspecting the solution policy and its value function to gain such understanding is infeasible, because they lack the domain semantics and concepts in which end-users are interested. Users also lack information about which objectives, if any, conflict in a given problem instance, and what compromises had to be made.

We further investigated an approach for generating an automated explanation of an MDP policy that is based on: (i) describing the expected consequences of the policy in terms of domain-specific, human-concept values, and relating those values to the overall expected cost of the policy; and (ii) explaining any tradeoffs by contrasting the policy with counterfactual solutions (i.e., alternative policies that were not generated as the solution) on the basis of their human-concept values and the corresponding costs. We demonstrated our approach on MDP problems with two different cost criteria, namely the expected total-cost and average-cost criteria. This approach enhances resilient architectures by helping stakeholders understand and explore the decision making that goes into automated planning for maintaining system resilience.
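As a rough illustration of the explanation idea, the following is a minimal Python sketch, not the project's actual implementation. It evaluates deterministic policies on a toy MDP under the expected total-cost criterion, reports each policy's cost broken down by human-concept attribute, and contrasts the solution policy with a counterfactual alternative. The states, actions, attribute names ("security_risk", "service_downtime"), costs, and weights are all invented for illustration.

```python
"""Minimal sketch (hypothetical, not the project's tool): decompose an MDP
policy's expected total cost into human-concept attribute costs and
contrast it with a counterfactual policy."""
import numpy as np

# Toy MDP: 3 states (0, 1 transient; 2 absorbing goal), 2 actions.
# P[a][s][s'] is the transition probability under action a.
P = np.array([
    [[0.0, 0.9, 0.1],    # action 0 ("patch-now")
     [0.0, 0.2, 0.8],
     [0.0, 0.0, 1.0]],
    [[0.1, 0.3, 0.6],    # action 1 ("defer-patch")
     [0.0, 0.5, 0.5],
     [0.0, 0.0, 1.0]],
])

# Human-concept attributes: each maps (action, state) -> per-step cost.
ATTRS = {
    "security_risk":    np.array([[1.0, 2.0, 0.0], [4.0, 6.0, 0.0]]),
    "service_downtime": np.array([[5.0, 3.0, 0.0], [1.0, 1.0, 0.0]]),
}
WEIGHTS = {"security_risk": 1.0, "service_downtime": 0.5}

def attribute_values(policy, start=0):
    """Expected total cost per attribute under a deterministic policy,
    found by solving (I - P_pi) V = c_pi over the transient states."""
    n = P.shape[1]
    transient = [0, 1]                     # state 2 is the absorbing goal
    P_pi = np.array([P[policy[s], s] for s in range(n)])
    A = np.eye(len(transient)) - P_pi[np.ix_(transient, transient)]
    values = {}
    for name, cost in ATTRS.items():
        c_pi = np.array([cost[policy[s], s] for s in transient])
        V = np.linalg.solve(A, c_pi)
        values[name] = V[transient.index(start)]
    return values

def explain(policy, counterfactual):
    """Contrastive explanation: attribute values and weighted total cost
    of the solution policy versus an alternative policy."""
    for name, pi in [("solution", policy), ("counterfactual", counterfactual)]:
        vals = attribute_values(pi)
        total = sum(WEIGHTS[k] * v for k, v in vals.items())
        print(f"{name}: "
              + ", ".join(f"{k}={v:.2f}" for k, v in vals.items())
              + f", weighted total cost={total:.2f}")

explain(policy=[0, 0, 0], counterfactual=[1, 1, 0])
```

The per-attribute decomposition works here because the overall cost is a weighted sum of attribute costs, so each attribute's contribution can be obtained by a separate policy evaluation, and the contrastive printout exposes which attribute a counterfactual policy would trade for another.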

COMMUNITY ENGAGEMENTS (If applicable)

EDUCATIONAL ADVANCES (If applicable)