Model-Based Explanation For Human-in-the-Loop Security - October 2020

PI(s), Co-PI(s), Researchers: David Garlan, Bradley Schmerl (CMU)

HARD PROBLEM(S) ADDRESSED
Human Behavior
Metrics
Resilient Architectures

We are addressing human behavior by providing understandable explanations for automated mitigation plans generated by self-protecting systems that use various models of the software, network, and attack. We are addressing resilience by providing defense plans that are automatically generated as the system runs and that account for the current context, system state, observable properties of the attacker, and potentially observable operations of the defense.
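To illustrate the kind of runtime decision such a self-protecting system makes, the sketch below scores candidate defense plans against the current context and picks the best one. The plan names, attributes, and weights are purely hypothetical examples, not the project's actual models or implementation.

```python
# Hypothetical sketch: selecting a defense plan by weighted utility over
# context-dependent objectives. All names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    cost: float      # disruption to legitimate users (lower is better)
    coverage: float  # fraction of the attack surface mitigated
    stealth: float   # how little the defense reveals to the attacker

def score(plan: Plan, weights: dict) -> float:
    """Weighted utility: reward coverage and stealth, penalize cost."""
    return (weights["coverage"] * plan.coverage
            + weights["stealth"] * plan.stealth
            - weights["cost"] * plan.cost)

def select_plan(plans: list, weights: dict) -> Plan:
    """Pick the highest-utility plan under the current context's weights."""
    return max(plans, key=lambda p: score(p, weights))

plans = [
    Plan("block-ip", cost=0.1, coverage=0.6, stealth=0.9),
    Plan("throttle", cost=0.3, coverage=0.8, stealth=0.5),
    Plan("isolate-host", cost=0.8, coverage=0.95, stealth=0.2),
]
# A context in which minimizing disruption to users matters most:
weights = {"coverage": 1.0, "stealth": 0.5, "cost": 2.0}
best = select_plan(plans, weights)
```

Because the weights encode the current context, the same candidate set can yield different selections as the system's situation changes at runtime.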

PUBLICATIONS

  • Roykrong Sukkerd, Reid Simmons and David Garlan. Tradeoff-Focused Contrastive Explanation for MDP Planning. In Proceedings of the 29th IEEE International Conference on Robot & Human Interactive Communication, Virtual, 31 August - 4 September 2020.
  • Nianyu Li, Javier Camara, David Garlan and Bradley Schmerl. Reasoning about When to Provide Explanation for Human-in-the-loop Self-Adaptive Systems. In Proceedings of the 2020 IEEE Conference on Autonomic Computing and Self-organizing Systems (ACSOS), Washington, D.C., 19-23 August 2020.

PUBLIC ACCOMPLISHMENT HIGHLIGHTS

End-users' trust in automated agents is important as automated decision-making and planning are increasingly used in many aspects of people's lives. In real-world applications of planning, multiple optimization objectives are often involved, so a planning agent's decisions can involve complex tradeoffs among competing objectives. It can be difficult for stakeholders to understand why an agent decides on a particular planning solution on the basis of its objective values. As a result, users may not know whether the agent is making the right decisions, and may lack trust in it. We reported on an approach, based on contrastive explanation, that enables a multi-objective MDP planning agent to explain its decisions in a way that communicates its tradeoff rationale in terms of domain-level concepts. We conducted a human subjects experiment to evaluate the effectiveness of our explanation approach. The results show that our approach significantly improves users' understanding, and confidence in their understanding, of the tradeoff rationale of the planning agent.
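The core idea of a contrastive, tradeoff-focused explanation can be sketched as comparing the chosen plan against an alternative objective by objective, stating where the chosen plan wins and what it sacrifices. The objective names and cost values below are hypothetical examples, not taken from the paper's implementation; we assume lower values are better for every objective.

```python
# Illustrative sketch of contrastive explanation for a multi-objective
# planner. Objectives are modeled as costs (lower is better); the names
# and numbers are hypothetical.

def contrastive_explanation(chosen: dict, alternative: dict) -> list:
    """For each objective, state how the chosen plan compares to the
    alternative, exposing the tradeoff the planner made."""
    lines = []
    for objective, value in chosen.items():
        delta = value - alternative[objective]
        if delta < 0:
            lines.append(f"{objective}: the chosen plan is better by {-delta:.1f}")
        elif delta > 0:
            lines.append(f"{objective}: the chosen plan sacrifices {delta:.1f}")
    return lines

chosen = {"travel_time": 12.0, "intrusiveness": 2.0}
alternative = {"travel_time": 10.0, "intrusiveness": 6.0}
explanation = contrastive_explanation(chosen, alternative)
```

Here the explanation conveys the rationale in domain-level terms: the agent accepted 2.0 more travel time in exchange for being 4.0 less intrusive, rather than presenting an opaque aggregate score.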

COMMUNITY ENGAGEMENTS (If applicable)

EDUCATIONAL ADVANCES (If applicable)