A Human-Agent-Focused Approach to Security Modeling

Project Details

Performance Period

Jan 01, 2018 - Oct 01, 2020

Institution(s)

University of Illinois at Urbana-Champaign

Sponsor(s)

National Security Agency

Project URL



Although human users can greatly affect the security of systems intended to be resilient, we lack a detailed understanding of their motivations, decisions, and actions. The broad aim of this project is to provide a scientific basis and techniques for cybersecurity risk assessment. We pursue that aim by developing a general-purpose modeling and simulation approach that covers both the cybersecurity aspects of cyber-systems and all of the human agents who interact with those systems: adversaries, defenders, and users. The ultimate goal is to generate quantitative metrics that help system architects make better design decisions and achieve system resiliency. Prior work on modeling enterprise systems and their adversaries has shown the promise of such modeling abstractions and the feasibility of using them to study how a large class of systems behaves under cyber attack. Our hypothesis is that incorporating all human agents who interact with a system will create more realistic simulations and produce insights into fundamental questions about how to lower cybersecurity risk. System architects can leverage the results to build more resilient systems that achieve their mission objectives despite attacks.

Examples of simulation results include time to compromise of information, time to loss of service, percentage of time the adversary has system access, and identification of the most common attack paths (a toy metric computation is sketched after the list below).
A model that incorporates agents can yield insights into questions such as:

  • How do technical improvements in prevention and detection countermeasures weigh against improvements to the defender's attack-attribution capabilities as perceived by the adversary? Technical improvements change system behavior, while attribution capabilities can change the behavior of a risk-averse adversary.
  • How do autonomous and human-initiated defenses compare in effectiveness, and what factors affect that comparison?
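
As a minimal sketch of how metrics like those above could be computed, consider the following toy Python example. The trace format, probabilities, and state names are illustrative assumptions for exposition, not the project's actual tool or its outputs.

```python
import random
from statistics import mean

def simulate_run(rng, horizon=1000):
    """Toy stochastic trace: at each step the adversary may gain or lose
    system access. Probabilities are arbitrary placeholders."""
    trace = []
    has_access = False
    for _ in range(horizon):
        if not has_access and rng.random() < 0.01:
            has_access = True      # successful intrusion
        elif has_access and rng.random() < 0.05:
            has_access = False     # defender eviction
        trace.append(has_access)
    return trace

rng = random.Random(42)            # fixed seed, so results are reproducible
runs = [simulate_run(rng) for _ in range(200)]

# Time to (first) compromise, over runs in which compromise occurred.
ttc = [trace.index(True) for trace in runs if True in trace]

# Percentage of time the adversary holds system access.
access_pct = mean(sum(trace) / len(trace) for trace in runs) * 100

print(f"mean time to compromise: {mean(ttc):.1f} steps")
print(f"adversary access: {access_pct:.1f}% of time")
```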

Assumptions made during the system design process will be made explicit and auditable in the model, helping to bring a more scientific approach to a field that currently relies largely on intuition and experience. The primary output of this research will be a well-developed security modeling formalism capable of realistically modeling the different human agents in a system, an implementation of that formalism in a software tool, and a validation of both the formalism and the tool with two or more real-life case studies. We plan to make the implementation of the formalism and the associated analysis tools freely available to academics, to encourage adoption of the scientific methodology our formalism will provide for security modeling.

Many academics and practitioners have recognized the need for computer security models, as evidenced by the numerous publications on the topic. Such modeling approaches are a step in the right direction, but each has its own limitations, especially in how it models the humans who interact with the cyber portion of the system. Some approaches explicitly model only the adversary (e.g., attack trees), or only one attacker/defender pair (e.g., attack-defense trees [50]); a sketch of the adversary-only abstraction follows below. Approaches for modeling multiple adversaries, defenders, and users in a system do exist, e.g., [9] [93], but they are not in common use, for several reasons: the models often lack realism because of oversimplification, are tailored to narrow use cases, produce results that are difficult to interpret, or are difficult to use. Our approach aims to overcome those limitations.
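
To make the limitation concrete, here is a minimal sketch of a classical attack tree: AND/OR goal decomposition with leaf success probabilities, assuming independent leaves. The tree structure and probabilities are invented for illustration. Note what is missing: there is no defender response and no user behavior, which is precisely the gap a multi-agent formalism targets.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    kind: str = "leaf"          # "leaf", "AND", or "OR"
    p: float = 0.0              # success probability of a leaf step
    children: List["Node"] = field(default_factory=list)

def success_prob(n: Node) -> float:
    """Propagate leaf probabilities up the tree (independence assumed)."""
    if n.kind == "leaf":
        return n.p
    probs = [success_prob(c) for c in n.children]
    if n.kind == "AND":         # all sub-goals must succeed
        out = 1.0
        for p in probs:
            out *= p
        return out
    fail = 1.0                  # OR: at least one sub-goal succeeds
    for p in probs:
        fail *= 1.0 - p
    return 1.0 - fail

root = Node("steal data", "OR", children=[
    Node("phish admin", p=0.3),
    Node("exploit server", "AND", children=[
        Node("find vuln", p=0.4),
        Node("evade IDS", p=0.5),
    ]),
])
print(f"P(attack succeeds) = {success_prob(root):.2f}")   # 0.44
```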

We seek to develop a formalism for building realistic models of a cyber-system and the humans who interact with it (adversaries, defenders, and users) in order to perform risk analysis as an aid to security architects faced with difficult design choices. We call this formalism the General Agent Model for the Evaluation of Security (GAMES). We define an agent as a human who may perform some action in the cyber-system: an adversary, a defender, or a user. The formalism will enable the modular construction of individual state-based agent models, which may be composed into one model so that the interactions among adversaries, defenders, and users can be studied. Once constructed, the composed model may be executed or simulated. During the simulation, each individual adversary, defender, or user may use an algorithm or policy to decide which actions to take in an attempt to move the system to a state that is advantageous for that agent. The simulation then probabilistically determines the outcome of each action and updates the state; a minimal sketch of this loop appears below. Modelers will have the flexibility to specify how the agents behave. Model execution will generate metrics that aid risk assessment and help the security analyst suggest appropriate defensive strategies. The model's results may be reproduced by re-executing the model, and its assumptions may be audited and improved upon by outside experts.
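
The following toy Python sketch illustrates the decide-resolve-update loop described above. The agent classes, policies, the state dictionary, and all probabilities are assumptions made for exposition; they are not the GAMES formalism itself.

```python
import random

class Agent:
    def __init__(self, name):
        self.name = name
    def choose_action(self, state, rng):
        raise NotImplementedError

class Adversary(Agent):
    def choose_action(self, state, rng):
        # Policy: escalate if already inside, otherwise try to break in.
        return "escalate" if state["adversary_in"] else "intrude"

class Defender(Agent):
    def choose_action(self, state, rng):
        return "evict" if state["alert"] else "monitor"

class User(Agent):
    def choose_action(self, state, rng):
        # Users occasionally weaken security, e.g. by clicking a phish.
        return "click_phish" if rng.random() < 0.02 else "work"

def resolve(action, state, rng):
    """Probabilistically determine the action's outcome, then update state."""
    if action == "intrude" and rng.random() < 0.05:
        state["adversary_in"] = True
    elif action == "click_phish":
        state["adversary_in"] = True
    elif action == "escalate" and rng.random() < 0.3:
        state["alert"] = True            # noisy escalation trips detection
    elif action == "evict":
        state["adversary_in"] = False
        state["alert"] = False
    # "monitor" and "work" leave the state unchanged in this sketch.

def simulate(seed=0, horizon=500):
    rng = random.Random(seed)            # fixed seed: results reproducible
    state = {"adversary_in": False, "alert": False}
    agents = [Adversary("A"), Defender("D"), User("U")]
    compromised_steps = 0
    for _ in range(horizon):
        for agent in agents:             # each agent acts once per step
            resolve(agent.choose_action(state, rng), state, rng)
        compromised_steps += state["adversary_in"]
    return compromised_steps / horizon

print(f"adversary access fraction: {simulate():.2%}")
```

Resolving each action immediately and in a fixed agent order is a simplification chosen to keep the sketch short; a full formalism would define composition and concurrency semantics explicitly.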