Model-Based Explanation For Human-in-the-Loop Security - July 2021
PI(s), Co-PI(s), Researchers: David Garlan, Bradley Schmerl (CMU)
HARD PROBLEM(S) ADDRESSED
Human Behavior
Metrics
Resilient Architectures
We are addressing human behavior by providing understandable explanations for automated mitigation plans generated by self-protecting systems that use models of the software, network, and attack. We are addressing resilient architectures by providing defense plans that are automatically generated as the system runs, while accounting for the current context, system state, observable properties of the attacker, and potentially observable operations of the defense.
PUBLICATIONS
Rebekka Wohlrab and David Garlan. Defining Utility Functions for Multi-Stakeholder Self-Adaptive Systems. In Proceedings of the 27th International Working Conference on Requirements Engineering: Foundation for Software Quality, Essen, Germany (Virtual), 12-15 April 2021.
Danny Weyns, Bradley Schmerl, Masako Kishida, Alberto Leva, Marin Litoiu, Necmiye Ozay, Colin Paterson and Kenji Tei. Towards Better Adaptive Systems by Combining MAPE, Control Theory, and Machine Learning. In Proceedings of the 16th Symposium on Software Engineering for Adaptive and Self-Managing Systems, Virtual, 18-21 May 2021.
Nianyu Li, Javier Camara, David Garlan, Bradley Schmerl and Zhi Jin. Hey! Preparing Humans to do Tasks in Self-adaptive Systems. In Proceedings of the 16th Symposium on Software Engineering for Adaptive and Self-Managing Systems, Virtual, 18-21 May 2021. Awarded Best Student Paper.
David Garlan. The Unknown Unknowns are not Totally Unknown. In Proceedings of the 16th Symposium on Software Engineering for Adaptive and Self-Managing Systems, Virtual, 18-21 May 2021.
PUBLIC ACCOMPLISHMENT HIGHLIGHTS
Many self-adaptive systems benefit from human involvement: human operators can complement the capabilities of systems (e.g., by supervising decisions, or by performing adaptations and tasks involving physical changes that cannot be automated). However, insufficient preparation (e.g., lack of task context comprehension) may hinder the effectiveness of human involvement, especially when operators are unexpectedly interrupted to perform a new task. A preparatory notification of a task, provided in advance, can help human operators focus their attention on the forthcoming task and understand its context before execution, improving effectiveness. Nevertheless, deciding when to use a preparatory notification as a tactic is not obvious: it entails considering uncertainties induced by human operator behavior (an operator might ignore the notification), human attributes (e.g., operator training level), and the state of the system and its environment.

Informed by work in cognitive science on human attention and context management, we extended our formal framework to reason about the use of preparatory notifications in self-adaptive systems involving human operators. The framework characterizes the effects of managing attention via task notification in terms of task context comprehension. Building on this framework, we developed an automated probabilistic reasoning technique that determines when, and in what form, a preparatory notification tactic should be used to optimize system goals.
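The trade-off described above — the comprehension benefit of a notification versus the chance the operator ignores it and the cost of interrupting — can be illustrated with a toy expected-utility calculation. This is a minimal sketch, not the project's formal framework or its actual reasoning engine; all attribute names, probabilities, and utility values are invented for illustration.

```python
# Hypothetical sketch of a notify/don't-notify decision based on
# expected utility. All numbers and model shapes are assumptions.
from dataclasses import dataclass

@dataclass
class Operator:
    training_level: float   # 0.0 (novice) .. 1.0 (expert); assumed attribute
    p_ignore: float         # probability the operator ignores the notice

def p_comprehend(op: Operator, notified: bool) -> float:
    """Probability the operator comprehends the task context.

    Assumption: training raises the baseline; a heeded notification
    closes part of the remaining comprehension gap.
    """
    base = 0.3 + 0.5 * op.training_level
    if notified:
        heeded = base + 0.6 * (1.0 - base)  # comprehension if notice is heeded
        # Mix the heeded and ignored cases by the ignore probability.
        return op.p_ignore * base + (1.0 - op.p_ignore) * heeded
    return base

def expected_utility(op: Operator, notified: bool,
                     u_success: float = 10.0,
                     u_failure: float = -4.0,
                     notice_cost: float = 0.5) -> float:
    """Expected task-outcome utility, minus the cost of interrupting."""
    p = p_comprehend(op, notified)
    eu = p * u_success + (1.0 - p) * u_failure
    return eu - (notice_cost if notified else 0.0)

def should_notify(op: Operator) -> bool:
    """Use the notification tactic only when it raises expected utility."""
    return expected_utility(op, True) > expected_utility(op, False)

# A poorly trained operator benefits from the notice; a well-trained
# operator who almost always ignores notices does not.
print(should_notify(Operator(training_level=0.2, p_ignore=0.3)))   # True
print(should_notify(Operator(training_level=0.9, p_ignore=0.95)))  # False
```

Even this toy version shows why the decision is not obvious: the same tactic helps one operator profile and hurts another, which is why the actual technique reasons probabilistically over operator attributes and system state rather than applying a fixed policy.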
COMMUNITY ENGAGEMENTS (If applicable)
EDUCATIONAL ADVANCES (If applicable)