Model-Based Explanation For Human-in-the-Loop Security - January 2023

PI(s), Co-PI(s), Researchers: David Garlan, Bradley Schmerl (CMU)

HARD PROBLEM(S) ADDRESSED
Human Behavior
Metrics
Resilient Architectures

We are addressing human behavior by providing understandable explanations for the automated mitigation plans generated by self-protecting systems, which use various models of the software, network, and attack. We are addressing resilient architectures by providing defense plans that are generated automatically as the system runs and that account for the current context, the system state, observable properties of the attacker, and potential observable operations of the defense.

PUBLICATIONS

Rebekka Wohlrab, Javier Camara, David Garlan and Bradley Schmerl. Explaining quality attribute tradeoffs in automated planning for self-adaptive systems. In Journal of Systems and Software, October 2022.

Javier Camara, Rebekka Wohlrab, David Garlan and Bradley Schmerl. ExTrA: Explaining architectural design tradeoff spaces via dimensionality reduction. In Journal of Systems and Software, December 2022. https://doi.org/10.1016/j.jss.2022.111578.

PUBLIC ACCOMPLISHMENT HIGHLIGHTS

We completed publication of the material reported last quarter, as listed above. We have developed a framework that uses statistical approaches commonly used in machine learning to simplify explanations of plans chosen from large trade-off spaces. The approach combines principal component analysis (PCA), decision trees, and classification to identify the key factors in deciding which plans to choose. This allows explanations to focus on the factors that actually influenced the choice of plan, reducing the amount of information and context a human would need to comprehend an explanation. We have several publications about this work currently under review.
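
To illustrate the idea, the following minimal sketch (in Python with scikit-learn) shows how PCA and a shallow decision tree can together surface the quality attributes that drive plan selection in a trade-off space. The attribute names, synthetic data, and selection rule here are hypothetical placeholders, not taken from our implementation.

# Sketch: explain plan selection in a trade-off space with PCA + a decision tree.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical trade-off space: each row is a candidate mitigation plan scored
# on four quality attributes (names are illustrative).
attributes = ["cost", "latency", "safety", "detectability"]
plans = rng.random((200, len(attributes)))

# Hypothetical selection rule standing in for the planner's choice:
# prefer plans that are safe and cheap.
selected = (plans[:, 2] > 0.6) & (plans[:, 0] < 0.5)

# PCA identifies which combinations of attributes account for most of the
# variation across the space of candidate plans.
pca = PCA(n_components=2)
pca.fit(plans)
print("Explained variance ratios:", pca.explained_variance_ratio_)
print("Principal component loadings:\n", pca.components_)

# A shallow decision tree classifier over the original attributes yields a
# human-readable summary of which factors actually drove the selection.
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(plans, selected)
print(export_text(tree, feature_names=attributes))

In this sketch, the printed tree exposes only the attributes and thresholds that separate selected from rejected plans, which is the kind of reduced explanation the framework aims to produce.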

COMMUNITY ENGAGEMENTS (If applicable)

EDUCATIONAL ADVANCES (If applicable)