Model-Based Explanation For Human-in-the-Loop Security - April 2020
PI(s), Co-PI(s), Researchers: David Garlan, Bradley Schmerl (CMU)
HARD PROBLEM(S) ADDRESSED
Human Behavior
Metrics
Resilient Architectures
We are addressing human behavior by providing understandable explanations for the automated mitigation plans generated by self-protecting systems, which use various models of the software, network, and attack. We are addressing resilience by providing defense plans that are generated automatically as the system runs and that account for the current context, system state, observable properties of the attacker, and potentially observable operations of the defense.
PUBLICATIONS
None.
PUBLIC ACCOMPLISHMENT HIGHLIGHTS
Explanation is sometimes helpful in allowing a human to understand why a system is making certain decisions. However, explanations come with costs, e.g., delayed actions or the possibility that a human makes a bad judgment. Hence, it is not always obvious whether explanations will improve the satisfaction of system goals and, if so, when to provide them to a human. We defined a formal framework for reasoning about explanations of adaptive system behaviors and the conditions under which they are warranted. Specifically, we characterized explanations in terms of their impact on a human operator's ability to engage in adaptive actions. We leveraged a probabilistic reasoning tool to determine when an explanation should be used as a tactic in an adaptation strategy in order to improve overall system utility. The approach is illustrated in a representative scenario: an adaptive news website facing potential denial-of-service attacks, in which the framework decides when an explanation should be provided based on knowledge of the human operator's capability and the cost associated with generating the explanation.
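At its core, the decision of whether to explain is an expected-utility trade-off: an explanation raises the probability that the operator acts correctly, but costs utility to generate and delays action. The Python sketch below illustrates only that decision criterion; it is not the project's implementation (which uses a probabilistic reasoning tool over formal models), and all function names and parameter values are illustrative assumptions.

```python
# Illustrative sketch of the explanation trade-off described above.
# Not the project's actual tooling; all names and values are assumptions.

def expected_utility(p_correct_action: float,
                     utility_success: float,
                     utility_failure: float,
                     explanation_cost: float = 0.0) -> float:
    """Expected system utility of an operator action, minus any explanation cost."""
    return (p_correct_action * utility_success
            + (1.0 - p_correct_action) * utility_failure
            - explanation_cost)


def should_explain(p_without: float, p_with: float,
                   utility_success: float, utility_failure: float,
                   explanation_cost: float) -> bool:
    """Use an explanation tactic only if it raises expected utility.

    p_without / p_with: probability the operator acts correctly without /
    with an explanation, reflecting the operator's capability.
    explanation_cost: utility lost to generating the explanation and to
    the delay it introduces.
    """
    eu_without = expected_utility(p_without, utility_success, utility_failure)
    eu_with = expected_utility(p_with, utility_success, utility_failure,
                               explanation_cost)
    return eu_with > eu_without


# Example: an operator whose chance of a correct mitigation rises from
# 0.6 to 0.9 with an explanation, where a wrong action is costly.
print(should_explain(p_without=0.6, p_with=0.9,
                     utility_success=100.0, utility_failure=-50.0,
                     explanation_cost=10.0))  # True: the explanation pays off
```

In the actual framework this comparison is made within an adaptation strategy, so the probabilities and costs are derived from models of the system, the attack, and the operator rather than supplied by hand as above.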
COMMUNITY ENGAGEMENTS (If applicable)
EDUCATIONAL ADVANCES (If applicable)