A Monitoring, Fusion, and Response for Cyber Resilience - October 2021
PI: William Sanders
Researchers: Michael Rausch
HARD PROBLEM(S) ADDRESSED
This project addresses the following Hard Problems from the list released in November 2012.
Accounting for Human Behavior - Recognizing the influence of human actions on security outcomes, this project aims to make fundamental advances in scientifically motivated techniques for computer security risk assessment. To that end, we are developing a general-purpose, easy-to-use formalism that allows realistic modeling of cyber systems and of all human agents that interact with them, including adversaries, defenders, and users. The ultimate goal is to generate quantitative results that help system architects make better design decisions.
Our hypothesis is that models that incorporate all human agents who interact with a system will produce insightful metrics, which system architects can leverage to build more resilient systems that achieve their mission objectives despite attacks. We are particularly interested in performing uncertainty quantification and sensitivity analysis of cyber security models using specially constructed metamodels, and in using those metamodels to help validate the original models. A sketch of this workflow appears below.
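To make the workflow concrete, the following minimal sketch (Python with scikit-learn) shows how a fast surrogate metamodel could stand in for a slow security model during sensitivity analysis. The security_model function, its parameters, and the choice of learner are hypothetical placeholders for illustration only, not artifacts of this project.

```python
# Minimal sketch of metamodel-based sensitivity analysis, assuming a
# hypothetical slow-running security model wrapped as `security_model`.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

def security_model(attack_rate, defender_skill, patch_delay):
    """Stand-in for a slow state-based model; returns a resilience metric."""
    return np.exp(-attack_rate * patch_delay) * defender_skill

# Sample the parameter space and evaluate the (slow) model once per sample.
X = rng.uniform(low=[0.1, 0.0, 1.0], high=[2.0, 1.0, 30.0], size=(500, 3))
y = np.array([security_model(*row) for row in X])

# Fit a fast-running metamodel (surrogate) to the sampled input/output pairs.
surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Run the sensitivity analysis against the cheap surrogate instead of the
# slow model: rank parameters by how much shuffling them degrades accuracy.
result = permutation_importance(surrogate, X, y, n_repeats=10, random_state=0)
for name, score in zip(["attack_rate", "defender_skill", "patch_delay"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```

In practice the slow model would be evaluated offline to build the training set, and only the surrogate would be queried during the many evaluations that sensitivity analysis, uncertainty quantification, or optimization require.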
PUBLICATIONS
Papers written as a result of this research during the current quarter only.
M. Rausch and W.H. Sanders. Evaluating the Effectiveness of Metamodels in Emulating Quantitative Models. Proceedings of the International Conference on Quantitative Evaluation of Systems (QEST), Paris, France, August 23-27, 2021.
Abstract: It is often prohibitively time-consuming to do sensitivity analysis, uncertainty quantification, and optimization with complex state-based quantitative models because each model execution or solution takes so long to complete, and many such executions are necessary to complete the analysis. One way to approach this problem is to use metamodels that emulate the behavior of the base model but run much faster. These metamodels may be automatically constructed using machine learning techniques, and then the relevant analysis may be conducted on the fast-running metamodel in place of the slow-running model.
In this work, we evaluate the effectiveness of several different types of metamodels in emulating seven publicly available PRISM and Mobius models. In our evaluation, we found that the metamodels are reasonably accurate and are several thousand times faster than the corresponding models they emulate. Furthermore, we find that stacking-based metamodels are significantly more accurate than state-of-the-practice metamodels. We show that metamodeling is a powerful and practical tool for modelers interested in understanding the behavior of their models, because it makes feasible analysis techniques that would otherwise take too long to run on the original models.
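As a rough illustration of the stacking idea mentioned in the abstract (not the code used in the paper), the sketch below fits a stacking-based metamodel to synthetic samples standing in for outputs of a slow PRISM or Mobius model. The base learners, meta-learner, and synthetic target are illustrative assumptions.

```python
# Minimal sketch of a stacking-based metamodel; the synthetic target merely
# stands in for a metric sampled from a slow PRISM or Mobius model.
import numpy as np
from sklearn.ensemble import StackingRegressor, GradientBoostingRegressor
from sklearn.linear_model import RidgeCV
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(1000, 4))          # sampled model parameters
y = np.sin(3 * X[:, 0]) + X[:, 1] * X[:, 2] ** 2   # stand-in for model output
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Heterogeneous base learners combined by a simple meta-learner.
stack = StackingRegressor(
    estimators=[
        ("gbr", GradientBoostingRegressor(random_state=0)),
        ("knn", KNeighborsRegressor(n_neighbors=5)),
    ],
    final_estimator=RidgeCV(),
)
stack.fit(X_train, y_train)

# The fitted stack emulates the slow model; querying it is orders of
# magnitude faster, making analyses that need many evaluations feasible.
print("held-out R^2:", round(stack.score(X_test, y_test), 3))
```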
KEY HIGHLIGHTS
Each effort should submit one or two specific highlights. Each item should include a paragraph or two along with a citation if available. Write as if for the general reader of IEEE S&P.
The purpose of the highlights is to give our immediate sponsors a body of evidence that the funding they are providing (in the framework of the SoS lablet model) is delivering results that "more than justify" the investment they are making.
This quarter we presented our findings on the applicability of metamodeling at QEST 2021, and the work was published in the conference proceedings. We received valuable feedback from conference participants, who responded positively to the work. We also developed a plan to make the metamodeling tool more widely available. The last graduate student working on this project has now graduated, so this will be the final report for the project.
COMMUNITY ENGAGEMENTS
No community engagements this quarter.
EDUCATIONAL ADVANCES:
None to report.