Data-Driven Model-Based Decision-Making - October 2016

Public Audience
Purpose: To highlight project progress. Information is generally presented at a high level accessible to the interested public. All information contained in the report (regions 1-3) is a Government Deliverable/CDRL.

PI(s): William Sanders, Masooda Bashir, David Nicol, and Aad van Moorsel*

Researchers: Ken Keefe, Mohamad Noureddine, Charles Morisset*, and Rob Cain* (*Newcastle Univ., UK)

HARD PROBLEM(S) ADDRESSED
This refers to the Hard Problems list released in November 2012.

  • Predictive Security Metrics - System security analysis requires a holistic approach that considers the behavior of non-human subsystems, bad actors or adversaries, and expected human participants such as users and system administrators. We are developing the HITOP modeling formalism to formally describe the behavior of human participants and how their decisions affect overall system performance and security. With this modeling methodology and the tool support we are developing, we will produce quantitative security metrics for cyber-human systems.
  • Human Behavior - Modeling and evaluating human behavior is challenging, but it is an essential component of security analysis. Stochastic modeling serves as a good approximation of human behavior, but we intend to do more with the HITOP method, which uses a task-based process modeling language that evaluates a human's opportunity, willingness, and capability to perform individual tasks in their daily behavior (a minimal sketch follows this list). Paired with an effective data collection strategy to validate model parameters, we are working to provide a sound model of human behavior.
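To make the opportunity-willingness-capability decomposition concrete, the Python sketch below scores a short two-task routine. The simple product form, the independence assumption, and all names and numbers are illustrative assumptions for this report, not the formal HITOP semantics:

    from dataclasses import dataclass

    @dataclass
    class Task:
        """One step in a human participant's routine (illustrative)."""
        name: str
        opportunity: float  # P(participant has the chance to act)
        willingness: float  # P(participant chooses to act)
        capability: float   # P(participant can perform the act correctly)

        def success_probability(self) -> float:
            # Assumes the three factors are independent -- a modeling
            # simplification, not the published HITOP semantics.
            return self.opportunity * self.willingness * self.capability

    def routine_success(tasks) -> float:
        """Probability that every task in a sequential routine succeeds."""
        p = 1.0
        for t in tasks:
            p *= t.success_probability()
        return p

    routine = [Task("check_logs", 0.9, 0.7, 0.95),
               Task("apply_patch", 0.8, 0.9, 0.85)]
    print(f"P(routine completes) = {routine_success(routine):.3f}")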

PUBLICATIONS
Papers published this quarter as a result of this research. Include title, author(s), venue published/presented, and a short description or abstract. Identify which hard problem(s) the publication addressed. Papers that have not yet been published should be reported in region 2 below.

[1] John C. Mace, Charles Morisset, and Aad van Moorsel, "Modelling User Availability in Workflow Resiliency Analysis", Symposium and Bootcamp on the Science of Security (HotSoS 2015), Urbana, IL, April 21-22, 2015.

Abstract: Workflows capture complex operational processes and include security constraints limiting which users can perform which tasks. An improper security policy may prevent certain tasks from being assigned and may force a policy violation. Deciding whether a valid user-task assignment exists for a given policy is known to be extremely complex, especially when considering user unavailability (known as the resiliency problem). Therefore, tools are required that allow automatic evaluation of workflow resiliency. Modelling well-defined workflows is fairly straightforward; however, user availability can be modelled in multiple ways for the same workflow. Correct choice of model is a complex yet necessary concern, as it has a major impact on the calculated resiliency. We describe a number of user availability models and their encoding in the model checker PRISM, used to evaluate resiliency. We also show how model choice can affect resiliency computation in terms of its value, memory, and CPU time.
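The paper's availability models are encoded in PRISM; the Python sketch below only illustrates the underlying notion of resiliency, estimated here by Monte Carlo simulation as the fraction of availability scenarios in which a valid user-task assignment still exists. The workflow, user pool, probabilities, and the one-task-per-user rule are all invented for illustration:

    import random

    # authorized[t]: users allowed to perform task t;
    # avail_prob[u]: probability user u is available (illustrative values).
    authorized = {"t1": {"alice", "bob"}, "t2": {"bob", "carol"}, "t3": {"carol"}}
    avail_prob = {"alice": 0.9, "bob": 0.8, "carol": 0.7}

    def assignment_exists(available):
        """Backtracking search for a valid assignment in which no user
        performs two tasks (a simple separation-of-duty rule)."""
        tasks = list(authorized)
        def assign(i, used):
            if i == len(tasks):
                return True
            for u in (authorized[tasks[i]] & available) - used:
                if assign(i + 1, used | {u}):
                    return True
            return False
        return assign(0, set())

    def resiliency(runs=20000):
        ok = 0
        for _ in range(runs):
            available = {u for u, p in avail_prob.items() if random.random() < p}
            ok += assignment_exists(available)
        return ok / runs

    print(f"estimated resiliency = {resiliency():.3f}")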

[2] John C. Mace, Charles Morisset, and Aad van Moorsel, "Impact of Policy Design on Workflow Resiliency Computation Time", Quantitative Evaluation of Systems (QEST 2015), Madrid, Spain, September 1-3, 2015.

Abstract: Workflows are complex operational processes that include security constraints restricting which users can perform which tasks. An improper user-task assignment may prevent the completion of the workflow, and deciding such an assignment at runtime is known to be complex, especially when considering user unavailability (known as the resiliency problem). Therefore, design tools are required that allow fast evaluation of workflow resiliency. In this paper, we propose a methodology for workflow designers to assess the impact of the security policy on computing the resiliency of a workflow. Our approach relies on encoding a workflow into the probabilistic model-checker PRISM, allowing its resiliency to be evaluated by solving a Markov Decision Process. We observe and illustrate that adding or removing some constraints has a clear impact on the resiliency computation time, and we compute the set of security constraints that can be artificially added to a security policy in order to reduce the computation time while maintaining the resiliency.
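The abstract's encoding solves a Markov Decision Process in PRISM; as a rough analogue, the sketch below computes the maximum expected completion probability of a tiny sequential workflow by backward induction, where the scheduler's choice of which available user to assign is the MDP decision. The workflow, user sets, and probabilities are assumptions for illustration, not the paper's encoding:

    from functools import lru_cache
    from itertools import chain, combinations

    TASKS = ({"alice", "bob"}, {"bob", "carol"}, {"alice", "carol"})
    AVAIL = {"alice": 0.9, "bob": 0.8, "carol": 0.7}

    def subsets(users):
        users = sorted(users)
        return chain.from_iterable(
            combinations(users, r) for r in range(len(users) + 1))

    def prob_of(subset, pool):
        """Probability that exactly `subset` of `pool` is available."""
        p = 1.0
        for u in pool:
            p *= AVAIL[u] if u in subset else 1.0 - AVAIL[u]
        return p

    @lru_cache(maxsize=None)
    def value(i, used=frozenset()):
        """Max expected completion probability from task i onward;
        `used` users may not be reused (separation of duty)."""
        if i == len(TASKS):
            return 1.0
        pool = TASKS[i] - used
        total = 0.0
        for s in subsets(pool):
            # The scheduler assigns the best available user (the MDP choice).
            best = max((value(i + 1, used | frozenset({u})) for u in s),
                       default=0.0)
            total += prob_of(s, pool) * best
        return total

    print(f"resiliency = {value(0):.3f}")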

[3] John C. Mace, Charles Morisset, and Aad van Moorsel, "Resiliency Variance in Workflows with Choice", International Workshop on Software Engineering for Resilient Systems (SERENE 2015), Paris, France, September 7-8, 2015.

Abstract: Computing a user-task assignment for a workflow with probabilistic user availability provides a measure of completion rate, or resiliency. To a workflow designer this indicates a risk of failure, which is especially useful for workflows that cannot be changed due to rigid security constraints. Furthermore, resiliency can help outline a mitigation strategy which states actions that can be performed to avoid workflow failures. A workflow with choice may have many different resiliency values, one for each of its execution paths. This makes understanding failure risk and mitigation requirements much more complex. We introduce resiliency variance, a new analysis metric for workflows which indicates volatility from the resiliency average. We suggest this metric can help determine the risk taken on by implementing a given workflow with choice. For instance, high average resiliency and low variance would suggest a low risk of workflow failure.
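The resiliency variance metric lends itself to a one-screen illustration: given a resiliency value per execution path, weight each by the probability its branch is taken, then compute the mean and variance. The weighting scheme and the numbers below are our assumptions for illustration, not the paper's exact definition:

    # (probability the path is taken, resiliency of that path) -- illustrative
    paths = [(0.5, 0.95), (0.3, 0.90), (0.2, 0.60)]

    mean = sum(p * r for p, r in paths)
    variance = sum(p * (r - mean) ** 2 for p, r in paths)

    print(f"average resiliency  = {mean:.3f}")
    print(f"resiliency variance = {variance:.4f}")
    # High average with low variance suggests low risk of workflow failure.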

[4] Ken Keefe and William H. Sanders, "Reliability Analysis with Dynamic Reliability Block Diagrams in the Mobius Modeling Tool", 9th EAI International Conference on Performance Evaluation Methodologies and Tools (VALUETOOLS 2015), Berlin, Germany, December 14-16, 2015.

Abstract: Reliability block diagram (RBD) models are a commonly used reliability analysis method. For static RBD models, combinatorial solution techniques are easy and efficient. However, static RBDs are limited in their ability to express varying system state, dependent events, and non-series-parallel topologies. A recent extension to RBDs, called Dynamic Reliability Block Diagrams (DRBD), has eliminated those limitations. This tool paper details the RBD implementation in the Mobius modeling framework and provides technical details for using RBDs independently or in composition with other Mobius modeling formalisms. The paper explains how the graphical front-end provides a user-friendly interface for specifying RBD models. The back-end implementation, which interfaces with the Mobius AFI to define and generate executable models that the Mobius tool uses to evaluate system metrics, is also detailed.
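For contrast with the dynamic case the paper addresses, the sketch below shows the combinatorial solution for static series-parallel RBDs that the abstract calls easy and efficient. It is a textbook construction, not the Mobius DRBD implementation:

    def series(*rels):
        """All blocks must work: R = product of the R_i."""
        r = 1.0
        for x in rels:
            r *= x
        return r

    def parallel(*rels):
        """At least one block must work: R = 1 - product of (1 - R_i)."""
        q = 1.0
        for x in rels:
            q *= 1.0 - x
        return 1.0 - q

    # Example: two redundant pumps in parallel, in series with a controller.
    system = series(parallel(0.9, 0.9), 0.99)
    print(f"system reliability = {system:.4f}")  # 0.9801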

ACCOMPLISHMENT HIGHLIGHTS

We continue work on the problem of optimizing data collection for parameterized probabilistic models. Collecting data for parameters is costly and often constrained; it is therefore important to identify how data collection is best spread across parameter data sources for accurate model checking. The optimization technique has to be model-independent. Our objective is to identify the optimal data collection strategy for a given model within an allowed budget.

We have developed and tested an extension to the probabilistic model checker PRISM that we call Data Collection Optimization (DCO). DCO analyzes candidate data collection strategies for a probabilistic model encoded in PRISM and finds the best strategy using the optimization algorithm defined in [1]. The algorithm generates a set of strategies that satisfy a given budget, each specifying the number of samples to be collected from each of the model's parameter data sources. Monte Carlo simulation is performed on the model with each of these strategies, and the optimal strategy is identified using the variance method (a sketch follows below).
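The sketch below mimics that loop on a toy model whose output is the product of two Bernoulli parameters: enumerate allocations of a fixed sample budget, simulate repeated data collections per allocation, and keep the allocation with the lowest output variance. The toy model, budget, and step size are assumptions; the actual DCO extension operates on PRISM models:

    import random
    import statistics

    TRUE = (0.7, 0.4)   # true parameter values (unknown in practice)
    BUDGET, STEP, TRIALS = 100, 10, 2000

    def estimate(p, n):
        """Parameter estimate from n Bernoulli samples."""
        return sum(random.random() < p for _ in range(n)) / n

    def strategy_variance(ns):
        """Variance of the model output across repeated data collections."""
        outs = [estimate(TRUE[0], ns[0]) * estimate(TRUE[1], ns[1])
                for _ in range(TRIALS)]
        return statistics.pvariance(outs)

    strategies = [(n, BUDGET - n) for n in range(STEP, BUDGET, STEP)]
    best = min(strategies, key=strategy_variance)
    print(f"best allocation of {BUDGET} samples: {best}")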

We have also developed and tested a PRISM extension that analyzes the sensitivity of a model's results to its input parameters. Sensitivity analysis of a probabilistic model's inputs is performed using the differential method. Using this extension, a user can manually create data collection strategies, which can then be optimized using our DCO extension.
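As we use the term above, the differential method estimates how strongly the model output responds to each input; the sketch below does this with central finite differences on a toy stand-in for a PRISM model. The function and parameter names are hypothetical:

    def model(params):
        # Toy stand-in for a PRISM model-checking result.
        return params["p_attack"] * (1.0 - params["p_detect"])

    def sensitivities(params, h=1e-5):
        """Central-difference estimate of d(output)/d(parameter)."""
        grads = {}
        for name in params:
            hi = dict(params); hi[name] += h
            lo = dict(params); lo[name] -= h
            grads[name] = (model(hi) - model(lo)) / (2 * h)
        return grads

    base = {"p_attack": 0.3, "p_detect": 0.8}
    for name, g in sensitivities(base).items():
        print(f"d(output)/d({name}) = {g:+.3f}")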