Model-Based Explanation For Human-in-the-Loop Security - July 2022

PI(s), Co-PI(s), Researchers: David Garlan, Bradley Schmerl (CMU)

HARD PROBLEM(S) ADDRESSED
Human Behavior
Metrics
Resilient Architectures

We are addressing human behavior by providing understandable explanations for the automated mitigation plans generated by self-protecting systems, which use various models of the software, network, and attack. We are addressing resilience by providing defense plans that are generated automatically as the system runs and that account for the current context, system state, observable properties of the attacker, and potentially observable operations of the defense.

PUBLICATIONS

"Modeling and Analysis of Explanation for Secure Industrial Control Systems," Sridhar Adepu, Nianyu Li, Eunsuk Kang and David Garlan. Accepted for Publication to the ACM Transacations of Autonomous and Adaptive Systems, July 2022.

PUBLIC ACCOMPLISHMENT HIGHLIGHTS

Many self-adaptive systems benefit from human involvement and oversight, where a human operator can provide expertise not available to the system and detect problems that the system is unaware of. One way of achieving this synergy is to place the human operator on the loop, i.e., providing supervisory oversight and intervening in the case of questionable adaptation decisions. To make such interaction effective, an explanation can play an important role in allowing the human operator to understand why the system is making certain decisions and in improving the operator's knowledge of the system. This, in turn, may improve the operator's ability to intervene and, if necessary, override the decisions being made by the system. However, explanations incur costs, in terms of delayed actions and the possibility that the human makes a bad judgement. Hence, it is not always obvious whether an explanation will improve overall utility and, if so, what kind of explanation should be provided to the operator.

We define a formal framework for reasoning about explanations of adaptive system behaviors and the conditions under which they are warranted. Specifically, we characterize explanations in terms of explanation content, effect, and cost. We use a dynamic system adaptation approach that leverages a probabilistic reasoning technique to determine when an explanation should be used in order to improve overall system utility. We evaluated our explanation framework in the context of a realistic industrial control system with adaptive behaviors.
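To make the cost/benefit trade-off concrete, the following minimal Python sketch illustrates the kind of decision the framework formalizes: present an explanation only when its expected effect on the operator's decisions outweighs its cost. This is an illustrative sketch, not the implementation from the paper; all names, probabilities, and utility values below are assumptions.

# Minimal illustrative sketch (not the paper's implementation) of the decision
# the framework formalizes: explain only when the explanation's expected effect
# outweighs its cost. All names and numeric values here are assumptions.

from dataclasses import dataclass

@dataclass
class Explanation:
    content: str   # what is shown to the operator
    effect: float  # assumed increase in P(operator acts correctly)
    cost: float    # assumed utility penalty, e.g., delay before acting

def expected_utility(p_correct, u_correct, u_incorrect, cost=0.0):
    # Expected utility of a supervised decision point, minus any explanation cost.
    return p_correct * u_correct + (1.0 - p_correct) * u_incorrect - cost

def should_explain(p_base, exp, u_correct, u_incorrect):
    # Compare expected utility with and without presenting the explanation.
    u_without = expected_utility(p_base, u_correct, u_incorrect)
    p_with = min(1.0, p_base + exp.effect)
    u_with = expected_utility(p_with, u_correct, u_incorrect, exp.cost)
    return u_with > u_without

# Example: unaided, the operator judges a questionable adaptation correctly
# 60% of the time; an explanation raises this to 85% but delays the response.
exp = Explanation("sensor readings disagree with the plant model", 0.25, 5.0)
print(should_explain(0.60, exp, u_correct=100.0, u_incorrect=-50.0))  # -> True

In this example the explanation's improvement in operator accuracy more than offsets its delay cost, so explaining raises expected utility; with a smaller effect or a larger cost, the same comparison would favor acting without an explanation.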

COMMUNITY ENGAGEMENTS (If applicable)

We are involved in organizing the Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS), which is the main conference in our area and was held in Pittsburgh in May 2022.

EDUCATIONAL ADVANCES (If applicable)