Model-Based Explanation For Human-in-the-Loop Security - October 2021
PI(s), Co-PI(s), Researchers: David Garlan, Bradley Schmerl (CMU)
HARD PROBLEM(S) ADDRESSED
Human Behavior
Metrics
Resilient Architectures
We are addressing human behavior by providing understandable explanations for automated mitigation plans generated by self-protecting systems that use various models of the software, network, and attack. We are addressing resilience by providing defense plans that are generated automatically as the system runs and that account for the current context, system state, observable properties of the attacker, and potential observable operations of the defense.
PUBLICATIONS
- Danny Weyns, Tomas Bures, Radu Calinescu, Barnaby Craggs, John Fitzgerald, David Garlan, Bashar Nuseibeh, Liliana Pasquale, Awais Rashid, Ivan Ruchkin and Bradley Schmerl. Six Software Engineering Principles for Smarter Cyber-Physical Systems. In Proceedings of the Workshop on Self-Improving System Integration, 27 September 2021.
- Javier Camara, Mariana Silva, David Garlan and Bradley Schmerl. Explaining Architectural Design Tradeoff Spaces: A Machine Learning Approach. In Proceedings of the 15th European Conference on Software Architecture, Virtual (originally Vaxjo, Sweden), 13-17 September 2021.
- Mohammed Alharbi, Shihong Huang and David Garlan. A Probabilistic Model for Personality Trait Focused Explainability. In Proceedings of the 4th International Workshop on Context-aware, Autonomous and Smart Architecture (CASA 2021), co-located with the 15th European Conference on Software Architecture, Virtual (originally Vaxjo, Sweden), 13-17 September 2021.
- Maria Casimiro, Paolo Romano, David Garlan, Gabriel A. Moreno, Eunsuk Kang and Mark Klein. Self-Adaptation for Machine Learning Based Systems. In Proceedings of the 1st International Workshop on Software Architecture and Machine Learning (SAML), Springer, Virtual (originally Vaxjo, Sweden), 14 September 2021.
- Changjian Zhang, Ryan Wagner, Pedro Orvalho, David Garlan, Vasco Manquinho, Ruben Martins and Eunsuk Kang. AlloyMax: Bringing Maximum Satisfaction to Relational Specifications. In The ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE) 2021, Virtual, 23-28 August 2021. Received a Distinguished Paper designation.
PUBLIC ACCOMPLISHMENT HIGHLIGHTS
- In software design, guaranteeing the correctness of run-time system behavior while achieving an acceptable balance among multiple quality attributes remains a challenging problem. Providing guarantees that those requirements will be satisfied when systems operate in uncertain environments is even more challenging. While recent developments in architectural analysis techniques can assist architects in exploring the satisfaction of quantitative guarantees across the design space, existing approaches remain limited because they do not explicitly link design decisions to the satisfaction of quality requirements. Furthermore, the amount of information they yield can be overwhelming to a human designer, making it difficult to see the forest for the trees. We developed an approach to analyzing architectural design spaces that addresses these limitations and provides a basis for explaining design tradeoffs. Our approach combines dimensionality reduction techniques employed in machine learning pipelines with quantitative verification, enabling architects to understand how design decisions contribute to the satisfaction of strict quantitative guarantees under uncertainty across the design space. Our results in two case studies show that the approach is feasible and provide evidence that dimensionality reduction is a viable way to facilitate comprehension of tradeoffs in poorly understood design spaces. Although this foundational work focuses on software design, it is also applicable to explaining run-time decisions when the space of possible actions is large, by focusing on the key elements that influence the decision made. [Camara et al. 2021]
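The following is a minimal, illustrative sketch of the general idea (not the implementation from the paper): design-decision values and quality results from quantitative verification are stacked into one matrix and projected onto principal components, whose loadings hint at which decisions dominate the tradeoffs. The design decisions, quality formulas, and numbers below are all assumptions made for illustration.

```python
# Illustrative sketch: relate design decisions to verified quality attributes
# via dimensionality reduction (PCA). Hypothetical data, not the paper's.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical design space: each row is one architecture configuration,
# columns are design decisions (e.g., replication level, cache size, timeout).
n_configs = 200
decisions = rng.integers(0, 4, size=(n_configs, 3)).astype(float)

# Hypothetical quantitative-verification results for each configuration
# (e.g., probability of meeting a latency guarantee, expected energy cost).
p_guarantee = 1.0 / (1.0 + np.exp(-(decisions[:, 0] - decisions[:, 2])))
energy_cost = decisions[:, 0] * 2.0 + decisions[:, 1] + rng.normal(0, 0.1, n_configs)

# Combine decisions and verified qualities, standardize, and reduce to 2D.
data = np.column_stack([decisions, p_guarantee, energy_cost])
data = (data - data.mean(axis=0)) / data.std(axis=0)
pca = PCA(n_components=2)
coords = pca.fit_transform(data)

# Component loadings indicate which decisions co-vary with which quality
# attributes, i.e., the dominant tradeoff directions in the design space.
print("explained variance:", pca.explained_variance_ratio_)
print("loadings (rows = components, cols = features):")
print(np.round(pca.components_, 2))
```

In this sketch, inspecting the loadings of the first components plays the role of "focusing on the key elements that influence the decision": features with large loadings are the ones a designer would examine first.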
- Giving humans the right amount of explanation at the right time is an important factor in maximizing effective collaboration between an adaptive system and the humans it interacts with. However, explanations come with costs, such as the time required to give the explanation and the humans' response time. Hence, it is not always clear whether explanations will improve overall system utility and, if so, how the system should provide them effectively, particularly given that different humans may benefit from different amounts and frequencies of explanation. To provide a partial basis for making such decisions, we defined a formal framework that incorporates human personality traits as one of the important elements guiding automated decision-making about the proper amount of explanation to give to the human to improve overall system utility. Specifically, we use probabilistic model analysis to determine how to use explanations effectively. To illustrate our approach, we developed Grid, a virtual human-and-system interaction game, to represent scenarios of human-system collaboration and to demonstrate how a human's personality traits can be used as a factor that systems consider when providing appropriate explanations. [Alharbi 2021]
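As a rough, hypothetical illustration of the kind of decision the framework automates: the paper uses probabilistic model analysis, whereas the sketch below simply compares the expected utility of explaining versus not explaining, with a personality trait (here, openness) shifting the human's probability of responding correctly and the time an explanation costs. The trait effect, utility function, and all numbers are assumptions, not taken from the paper.

```python
# Hedged sketch: should the system explain, given a personality trait?
# Utility trades off the benefit of a correct human response against the
# time spent explaining and responding. All values are illustrative.

def expected_utility(p_correct, benefit, response_time, explain_time=0.0,
                     time_cost=1.0):
    """Expected benefit of a correct response minus the cost of time spent."""
    return p_correct * benefit - time_cost * (response_time + explain_time)

def should_explain(openness):
    # Assumed effect of the trait: more open users gain more from an
    # explanation and need less extra time to process it.
    p_no_explain = 0.5
    p_explain = min(1.0, 0.5 + 0.4 * openness)
    u_no = expected_utility(p_no_explain, benefit=10.0, response_time=2.0)
    u_yes = expected_utility(p_explain, benefit=10.0,
                             response_time=2.0 + (1.0 - openness),
                             explain_time=1.5)
    return u_yes > u_no, u_no, u_yes

for openness in (0.2, 0.5, 0.9):
    explain, u_no, u_yes = should_explain(openness)
    print(f"openness={openness}: explain={explain} "
          f"(U_no={u_no:.2f}, U_yes={u_yes:.2f})")
```

With these assumed numbers, explanation pays off only for the more open user, which mirrors the intuition in the highlight that the proper amount of explanation depends on the individual human.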
COMMUNITY ENGAGEMENTS (If applicable)
EDUCATIONAL ADVANCES (If applicable)