Biblio
Probabilistic model checking is a useful technique for specifying and verifying properties of stochastic systems, including randomized protocols and reinforcement learning models. However, these methods rely on the assumed structure and probabilities of certain system transitions. These assumptions may be incorrect, and may even be violated by an adversary who gains control of some system components. In this paper, we develop a formal framework for adversarial robustness in systems modeled as discrete-time Markov chains (DTMCs). We base our framework on existing methods for verifying probabilistic temporal logic properties and extend it to include deterministic, memoryless policies acting in Markov decision processes (MDPs). Our framework includes a flexible approach for specifying structure-preserving and non-structure-preserving adversarial models. We outline a class of threat models under which adversaries can perturb system transitions, constrained by an ε-ball around the original transition probabilities. We define three main DTMC adversarial robustness problems: adversarial robustness verification, maximal δ synthesis, and worst-case attack synthesis. We present two optimization-based solutions to these three problems, leveraging traditional and parametric probabilistic model checking techniques. We then evaluate our solutions on two stochastic protocols and a collection of Grid World case studies, which model an agent acting in an environment described as an MDP. We find that the parametric solution yields fast computation for small parameter spaces. For less restrictive (stronger) adversaries, the number of parameters increases, and directly computing property satisfaction probabilities scales better. We demonstrate the usefulness of our definitions and solutions by comparing system outcomes over a range of properties, threat models, and case studies.
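As a rough illustration of the ε-ball threat model described above, the following sketch perturbs a toy four-state DTMC within a structure-preserving ε-ball and recomputes a reachability probability. The chain, state names, and numbers are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: structure-preserving epsilon-ball perturbation of a DTMC
# and its effect on a reachability probability (toy example, not the paper's model).
import numpy as np

def reach_prob(P, target, fail):
    """Probability of eventually reaching a state in `target` from each state,
    via the standard linear system for reachability; `fail` states never reach it."""
    n = P.shape[0]
    x = np.zeros(n)
    unknown = [s for s in range(n) if s not in target and s not in fail]
    A = np.eye(len(unknown)) - P[np.ix_(unknown, unknown)]
    b = P[np.ix_(unknown, sorted(target))].sum(axis=1)
    x[sorted(target)] = 1.0
    x[unknown] = np.linalg.solve(A, b)
    return x

# Toy 4-state chain: 0 -> {1, 2}, 1 -> {3 (goal), 0}, states 2 and 3 absorbing.
P = np.array([[0.0, 0.6, 0.4, 0.0],
              [0.3, 0.0, 0.0, 0.7],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

eps = 0.05
# Structure-preserving attack on state 0: shift up to eps of probability mass
# between transitions that already exist, keeping the row a valid distribution.
P_adv = P.copy()
P_adv[0, 1] -= eps   # weaken the transition toward the goal
P_adv[0, 2] += eps   # strengthen the transition toward the failure state
assert np.allclose(P_adv.sum(axis=1), 1.0)

p0 = reach_prob(P, target={3}, fail={2})[0]
p1 = reach_prob(P_adv, target={3}, fail={2})[0]
print(f"P(reach goal) original: {p0:.3f}, under eps-attack: {p1:.3f}")
```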
Due to the critical importance of Industrial Control Systems (ICS) to the operations of cities and countries, research into the security of critical infrastructure has become increasingly relevant and necessary. As a component of both the research and application sides of smart city development, accurate and precise modeling, simulation, and verification are key parts of robust design and development tools that provide critical assistance in preventing, detecting, and recovering from abnormal behavior in the sensors, controllers, and actuators that make up a modern ICS. However, while these tools have potential, they currently require helper tools to assist with their setup and configuration if they are to be widely adopted. Existing state-of-the-art tools are often technically complex and difficult to customize for any given IoT/ICS process. This is a serious barrier to entry for most technicians, engineers, researchers, and smart city planners, and it slows down the critical work of safety and security verification. To remedy this issue, we take existing simulation toolkits in the field of water management as a case study and extend their tools and algorithms with simple automated retrieval functionality and a much more in-depth, usable customization interface to accelerate simulation scenario design and implementation, allowing customization of the cyber-physical network infrastructure and of cyber attack scenarios. We additionally provide a novel in-tool assessment of network resilience based on graph-theoretic path diversity. Further, we lay out a roadmap for future development and application of the proposed tool, including extensions to resilience assessment and potential vulnerability model checking, and discuss applications of our work to other fields relevant to the design and operation of smart cities.
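The path-diversity resilience assessment mentioned above can be approximated with standard graph-theoretic measures. The sketch below uses the number of node-disjoint paths between a field device and the SCADA server as one such proxy; the toy topology and the choice of metric are our assumptions, not the tool's actual implementation.

```python
# Hypothetical sketch of a graph-theoretic path-diversity check for a
# cyber-physical network (toy water-management topology, illustrative names).
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("PLC1", "switch_A"), ("PLC2", "switch_A"),
    ("switch_A", "SCADA"), ("PLC1", "switch_B"),
    ("switch_B", "SCADA"), ("PLC2", "switch_B"),
])

# One common path-diversity proxy: the number of node-disjoint paths between
# a field device and the SCADA server (local node connectivity).
for plc in ("PLC1", "PLC2"):
    diversity = nx.node_connectivity(G, plc, "SCADA")
    print(f"{plc} -> SCADA: {diversity} node-disjoint paths")
```

A higher count means more independent routes survive the loss or compromise of intermediate network nodes, which is the intuition behind using path diversity as a resilience indicator.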
Many self-adaptive systems benefit from human involvement and oversight, where a human operator can provide expertise not available to the system and can detect problems that the system is unaware of. One way of achieving this is by placing the human operator on the loop, i.e., providing supervisory oversight and intervening in the case of questionable adaptation decisions. To make such interaction effective, explanation is sometimes helpful, allowing the human to understand why the system is making certain decisions and to calibrate their confidence in those decisions. However, explanations come with costs in terms of delayed actions and the possibility that the human may make a bad judgement. Hence, it is not always obvious whether explanations will improve overall utility and, if so, what kinds of explanation to provide to the operator. In this work, we define a formal framework for reasoning about explanations of adaptive system behaviors and the conditions under which they are warranted. Specifically, we characterize explanations in terms of explanation content, effect, and cost. We then present a dynamic adaptation approach that leverages a probabilistic reasoning technique to determine when an explanation should be used in order to improve overall system utility.
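One way to make the utility trade-off concrete is the expected-utility comparison below; the notation is ours and only illustrates the kind of reasoning described (explanation effect vs. cost), not the paper's actual model.

```latex
% Sketch in our own notation (an assumption, not the paper's formalism):
%   p_c, p_e  : probability the operator intervenes correctly without / with an explanation
%   U_ok, U_bad : utility of a correct / incorrect intervention
%   C_delay   : utility lost while the operator processes the explanation
\[
\mathbb{E}[U_{\text{no-expl}}] = p_c\, U_{\text{ok}} + (1 - p_c)\, U_{\text{bad}},
\qquad
\mathbb{E}[U_{\text{expl}}] = p_e\, U_{\text{ok}} + (1 - p_e)\, U_{\text{bad}} - C_{\text{delay}}
\]
\[
\text{provide an explanation} \iff \mathbb{E}[U_{\text{expl}}] > \mathbb{E}[U_{\text{no-expl}}]
\]
```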
Industrial control systems are moving from monolithic to distributed and cloud-connected architectures, which increases system complexity and vulnerability and thus complicates security analysis. When exhaustive verification accounts for this complexity, the state space to be explored grows drastically as the system model evolves and more details are considered. Eventually this may lead to state space explosion, which makes exhaustive verification infeasible. To address this, we use VDM-SL's combinatorial testing feature to generate security attacks that are executed against the model to verify whether the system has the desired security properties. We demonstrate our approach on a cloud-connected industrial control system that is responsible for performing safety-critical tasks and handling client requests sent to the control network. Although the approach is not exhaustive, it enables verification of mitigation strategies for a large number of attacks and complex systems within reasonable time.
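The combinatorial generation of attacks can be illustrated outside VDM-SL; the Python sketch below enumerates attack parameter combinations and executes each against a placeholder model, analogous in spirit to the trace-based approach described. The component names, actions, and property check are hypothetical.

```python
# Python analogue (not VDM-SL) of combinatorial attack generation: enumerate
# all combinations of attack parameters and check a security property after
# executing each one against a model stub.
from itertools import product

COMPONENTS = ["sensor", "controller", "actuator", "cloud_link"]
ACTIONS = ["spoof_value", "drop_message", "replay_message"]
TIMINGS = ["before_request", "during_request", "after_request"]

def run_attack(component, action, timing):
    """Placeholder: execute one attack trace against the system model and
    return True if the model still satisfies the security property."""
    # In the actual approach the model interpreter executes the generated trace
    # and the mitigation logic is evaluated; here we simply report success.
    return True

violations = []
for combo in product(COMPONENTS, ACTIONS, TIMINGS):
    if not run_attack(*combo):
        violations.append(combo)

total = len(COMPONENTS) * len(ACTIONS) * len(TIMINGS)
print(f"Executed {total} attack combinations, {len(violations)} property violations")
```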