Biblio
We understand a sociotechnical system as a microsociety in which autonomous parties interact with and about technical objects. We define governance as the administration of such a system by its participants. We develop an approach for governance based on a computational representation of norms. Our approach has the benefit of capturing stakeholder needs precisely while yielding adaptive resource allocation in the face of changes in both stakeholder needs and the environment. In current work, we are extending this approach to tackle some challenges in cybersecurity.
Extended abstract appearing in the IJCAI Journal Abstracts Track
The quantity of personal data gathered by service providers through our daily activities continues to grow at a rapid pace. Sharing, and subsequently analyzing, such data can support a wide range of activities, but concerns around privacy often prompt an organization to transform the data to meet certain protection models (e.g., k-anonymity or ε-differential privacy). These models, however, are based on simplistic adversarial frameworks, which can lead to both under- and over-protection. For instance, such models often assume that an adversary attacks a protected record exactly once. We introduce a principled approach that explicitly models the attack process as a series of steps. Specifically, we engineer a factored Markov decision process (FMDP) to optimally plan an attack from the adversary's perspective and assess the privacy risk accordingly. The FMDP captures the uncertainty in the adversary's belief (e.g., the number of identified individuals that match the de-identified data) and enables the analysis of various real-world deterrence mechanisms beyond a traditional protection model, such as a penalty for committing an attack. We present an algorithm to solve the FMDP and illustrate its efficiency by simulating an attack on publicly accessible U.S. census records against a real identified resource of over 500,000 individuals in a voter registry. Our results demonstrate that, while traditional privacy models commonly expect an adversary to attack exactly once per record, an optimal attack in our model may involve exploiting none, one, or more individuals in the pool of candidates, depending on context.
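To make the deterrence trade-off concrete, here is a minimal single-step sketch, not the paper's FMDP (which additionally models sequential attack steps and belief uncertainty): the adversary claims a re-identification only when the expected payoff, net of a penalty for a wrong claim, is positive. All function names and numeric values are hypothetical.

```python
# Illustrative one-step attack decision under deterrence (hypothetical numbers).
# With n equally likely candidates, a correct claim earns `payoff`; a wrong
# claim incurs `penalty`. The adversary attacks only if the expectation is > 0.

def optimal_action(n_candidates, payoff=100.0, penalty=40.0):
    """Return 'attack' or 'stop' for the one-step decision."""
    p_correct = 1.0 / n_candidates  # uniform prior over the candidate pool
    expected = p_correct * payoff - (1.0 - p_correct) * penalty
    return "attack" if expected > 0 else "stop"

policy = {n: optimal_action(n) for n in range(1, 7)}
# With these numbers, attacking is only worthwhile for small candidate pools,
# echoing the abstract's point that an optimal adversary may attack some
# records and leave others alone, depending on context.
```
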
The safety, security, and resilience of international postal, shipping, and transportation critical infrastructure are vital to the global supply chain that enables worldwide commerce and communications. But security on an international scale continues to fail in the face of new threats, such as the discovery by Panamanian authorities of suspected components of a surface-to-air missile system aboard a North Korean-flagged ship in July 2013 [1]. This reality calls for new and innovative approaches to critical infrastructure security. Owners and operators of critical postal, shipping, and transportation operations need new methods to identify, assess, and mitigate security risks and gaps in the most effective manner possible.
Split-cycle constant-period frequency modulation for flapping wing micro air vehicle control in two degrees of freedom was proposed, and its theoretical viability demonstrated, in previous work. Subsequent work on a split-cycle based physical control system has targeted on-the-fly configurability of all theoretically possible split-cycle wing control parameters, with high fidelity, on a physical Flapping Wing Micro Air Vehicle (FWMAV). Extending the physical vehicle and wing-level control modules developed previously, this paper details the FWMAV platform that has been designed and assembled to aid other researchers interested in the design, development, and analysis of high-level flapping flight controllers. In addition to the physical vehicle and the configurable control module, the platform provides numerous external communication access capabilities for conducting and validating sensor fusion studies for flapping flight control.
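The split-cycle waveform itself can be sketched compactly. The following is an illustrative reconstruction assuming the common cosine parameterization, in which the upstroke runs at a lowered frequency and the downstroke at a raised one chosen so that the total wingbeat period stays constant; the parameter values and function name are hypothetical, not taken from the paper.

```python
import math

def split_cycle_angle(t, omega=2 * math.pi * 25.0, delta=10.0):
    """Normalized wing stroke angle under split-cycle constant-period modulation.

    The upstroke runs at (omega - delta); the downstroke at (omega + sigma),
    with sigma chosen so the two half-cycles still sum to the nominal period
    2*pi/omega. Requires omega > 2*delta.
    """
    sigma = delta * omega / (omega - 2 * delta)  # keeps the period constant
    T = 2 * math.pi / omega                      # full wingbeat period
    tau = t % T                                  # time within the current cycle
    t_up = math.pi / (omega - delta)             # upstroke half-cycle duration
    if tau < t_up:
        return math.cos((omega - delta) * tau)
    # Phase-shift the downstroke so the angle is continuous at tau = t_up,
    # where both branches evaluate to cos(pi) = -1.
    return math.cos((omega + sigma) * (tau - t_up) + math.pi)
```

Varying `delta` per wingbeat shifts the cycle-averaged wing velocity between half-strokes, which is the mechanism the split-cycle scheme exploits for control, while the constant period keeps the wingbeat frequency fixed.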
In this paper we present an approach to implementing security as a Virtualized Network Function (VNF) within a Software-Defined Infrastructure (SDI). We present a scalable, flexible, and seamless design for a Deep Packet Inspection (DPI) system for network intrusion detection and prevention. We discuss how our design introduces significant reductions in both capital and operational expenses (CAPEX and OPEX). As proof of concept, we describe an implementation of a modular security solution that uses the SAVI SDI testbed to first detect an attack and then block it or redirect it to a honeypot for further analysis. We discuss our testing methodology and provide measurement results for test cases in which an application faces various security attacks.
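At its core, a DPI function matches packet payloads against threat signatures and hands the verdict to the infrastructure, which can then block the flow or steer it to a honeypot. A minimal sketch with hypothetical signatures (real DPI engines, including the one described here, use far richer rule languages and stateful inspection):

```python
import re

# Hypothetical signature set; labels and patterns are illustrative only.
SIGNATURES = {
    "sql_injection": re.compile(rb"(?i)union\s+select"),
    "path_traversal": re.compile(rb"\.\./\.\./"),
}

def inspect(payload: bytes):
    """Return the label of the first matching signature, or None.

    In an SDI deployment, a non-None result would be reported to the
    controller, which decides whether to drop the flow or redirect it
    to a honeypot for further analysis.
    """
    for label, pattern in SIGNATURES.items():
        if pattern.search(payload):
            return label
    return None
```

Running the inspection as a VNF, rather than on dedicated middleboxes, is what enables the elastic scaling and the CAPEX/OPEX reductions discussed in the abstract.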
This paper considers a 2-player strategic game for network routing under link disruptions. Player 1 (defender) routes flow through a network to maximize her value of effective flow while facing transportation costs. Player 2 (attacker) simultaneously disrupts one or more links to maximize her value of lost flow, but also faces a cost of disrupting links. This game is strategically equivalent to a zero-sum game. Linear programming duality and the max-flow min-cut theorem are applied to obtain properties that are satisfied in any mixed Nash equilibrium. In any equilibrium, both players achieve identical payoffs. While the defender's expected transportation cost decreases in the attacker's marginal value of lost flow, the attacker's expected cost of attack increases in the defender's marginal value of effective flow. Interestingly, the expected amount of effective flow decreases in both these parameters. These results can be viewed as a generalization of the classical max-flow with minimum transportation cost problem to adversarial environments.
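The mixed-equilibrium structure can be illustrated on a toy instance. The sketch below uses fictitious play, a standard learning scheme whose empirical frequencies converge to equilibrium in zero-sum games; this is an illustrative substitute, not the paper's LP-duality analysis. The network, payoffs, and variable names are hypothetical: two disjoint paths for the defender, two attackable links for the attacker, and the flow survives only if the attacker hits the other path.

```python
def fictitious_play(A, rounds=20000):
    """Approximate a mixed equilibrium of the zero-sum game with payoff
    matrix A (row player maximizes) via fictitious play."""
    m, n = len(A), len(A[0])
    row_counts, col_counts = [0] * m, [0] * n
    row_payoff = [0.0] * m   # cumulative payoff of each defender path
    col_payoff = [0.0] * n   # cumulative loss from each attacker link choice
    i, j = 0, 0
    for _ in range(rounds):
        for k in range(m):
            row_payoff[k] += A[k][j]   # best-respond to attacker's history
        for k in range(n):
            col_payoff[k] += A[i][k]   # best-respond to defender's history
        i = max(range(m), key=lambda k: row_payoff[k])
        j = min(range(n), key=lambda k: col_payoff[k])
        row_counts[i] += 1
        col_counts[j] += 1
    x = [c / rounds for c in row_counts]   # defender's mixed routing strategy
    y = [c / rounds for c in col_counts]   # attacker's mixed disruption strategy
    value = sum(x[a] * y[b] * A[a][b] for a in range(m) for b in range(n))
    return x, y, value

# Defender's payoff: 1 unit of effective flow if the attacker disrupts the
# link NOT on the chosen path, 0 otherwise (costs omitted for simplicity).
A = [[0.0, 1.0],
     [1.0, 0.0]]
x, y, v = fictitious_play(A)
# By symmetry the equilibrium randomizes 50/50 over paths and links, and the
# game value (expected effective flow) is about 0.5.
```
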
Cyber-physical systems combine data processing and physical interaction. Security in cyber-physical systems therefore involves more than traditional information security. This paper surveys recent research on security in cloud-based cyber-physical systems. In particular, it analyzes the security issues in modern production devices and smart mobility services, which are examples of cyber-physical systems from different application domains.
This chapter describes triggered control approaches for the coordination of networked cyber-physical systems. Given the coverage of the other chapters of this book, our focus is on self-triggered control and a novel approach we term team-triggered control.
The design and testing of pacemakers are challenging because of the need to capture the interaction between physical processes (e.g., the voltage signal in cardiac tissue) and the embedded software (e.g., a pacemaker). At the same time, there is a growing need for design and certification methodologies that can provide quality assurance for the embedded software. We describe recent progress in simulation-based techniques that are capable of ensuring guaranteed coverage. Our methods employ discrepancy functions, which impose bounds on system dynamics, and proceed by iteratively constructing over-approximations of the reachable set of states. We are able to prove time-bounded safety or produce counterexamples. We illustrate the techniques by analyzing a family of pacemaker designs against time-duration requirements and synthesize safe parameter ranges. We conclude by outlining the potential uses of this technology to improve the safety of medical device designs.
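The bloating idea behind discrepancy functions can be sketched for a scalar system. The example below assumes the simple Gronwall-type discrepancy beta(d, t) = d * exp(L * t) for dx/dt = -x, which soundly (if coarsely) bounds how far trajectories from nearby initial states can drift apart; the dynamics, threshold, and function names are illustrative, not the pacemaker models from the paper.

```python
import math

def simulate(x0, t_end, dt=0.01):
    """Euler simulation of dx/dt = -x from a single initial state."""
    xs, x, t = [], x0, 0.0
    while t <= t_end:
        xs.append((t, x))
        x += dt * (-x)
        t += dt
    return xs

def reach_tube(x0_center, x0_radius, t_end, L=1.0):
    """Over-approximate the reachable set from a ball of initial states.

    One simulated trace is bloated by the discrepancy bound
    beta(d, t) = d * exp(L * t), where L is a Lipschitz constant of the
    dynamics (L = 1 for dx/dt = -x). Returns (t, lower, upper) per step.
    """
    return [(t,
             x - x0_radius * math.exp(L * t),
             x + x0_radius * math.exp(L * t))
            for t, x in simulate(x0_center, t_end)]

tube = reach_tube(1.0, 0.1, 1.0)
# Time-bounded safety check: the whole tube must stay below an unsafe
# threshold (hypothetically 2.0 here). If it does, every initial state in
# the ball is safe up to t_end; otherwise the tube is refined or a
# counterexample is sought.
safe = all(hi < 2.0 for _, lo, hi in tube)
```

The iterative refinement in the paper's method corresponds to splitting the initial ball into smaller pieces when the bloated tube is too conservative to decide safety.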
Machine learning is being used in a wide range of application domains to discover patterns in large datasets. Increasingly, the results of machine learning drive critical decisions in applications related to healthcare and biomedicine. Such health-related applications are often sensitive, and thus any security breach would be catastrophic. Naturally, the integrity of the results computed by machine learning is of great importance. Recent research has shown that some machine-learning algorithms can be compromised by augmenting their training datasets with malicious data, leading to a new class of attacks called poisoning attacks. A hindered diagnosis may have life-threatening consequences, while a false diagnosis may cause patient distress and prompt users to distrust the machine-learning algorithm, or even abandon the entire system. In this paper, we present a systematic, algorithm-independent approach for mounting poisoning attacks across a wide range of machine-learning algorithms and healthcare datasets. The proposed attack procedure generates input data which, when added to the training set, can either cause the results of machine learning to have targeted errors (e.g., increase the likelihood of classification into a specific class) or simply introduce arbitrary errors (incorrect classification). These attacks may be applied to both fixed and evolving datasets. They can be applied even when only statistics of the training dataset are available or, in some cases, even without access to the training dataset, although at a lower efficacy. We establish the effectiveness of the proposed attacks using a suite of six machine-learning algorithms and five healthcare datasets. Finally, we present countermeasures against the proposed generic attacks, based on tracking and detecting deviations in various accuracy metrics, and benchmark their effectiveness.
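The flavor of such an attack can be sketched on a toy model. The example below poisons a stdlib-only nearest-centroid classifier with mislabeled points, a far simpler setting than the six algorithms and five healthcare datasets evaluated in the paper; all data, labels, and names are synthetic and illustrative.

```python
import random

random.seed(0)  # deterministic toy data

def make_data(n, mu, label):
    """n 2-D points drawn around mu, each tagged with `label`."""
    return [([random.gauss(mu[0], 0.5), random.gauss(mu[1], 0.5)], label)
            for _ in range(n)]

def train_centroids(data):
    """Fit a nearest-centroid classifier: one mean point per class."""
    sums = {}
    for x, y in data:
        s = sums.setdefault(y, [0.0, 0.0, 0])
        s[0] += x[0]; s[1] += x[1]; s[2] += 1
    return {y: (s[0] / s[2], s[1] / s[2]) for y, s in sums.items()}

def predict(centroids, x):
    return min(centroids, key=lambda y: (x[0] - centroids[y][0]) ** 2
                                       + (x[1] - centroids[y][1]) ** 2)

def accuracy(centroids, data):
    return sum(predict(centroids, x) == y for x, y in data) / len(data)

train = make_data(100, (0, 0), "healthy") + make_data(100, (3, 3), "sick")
test = make_data(100, (0, 0), "healthy") + make_data(100, (3, 3), "sick")
clean_acc = accuracy(train_centroids(train), test)

# Targeted poisoning: inject points far on the opposite side of the feature
# space but labeled "sick", dragging the sick centroid away so that genuinely
# sick cases now fall on the "healthy" side of the decision boundary.
poison = make_data(400, (-3, -3), "sick")
poisoned_acc = accuracy(train_centroids(train + poison), test)
```

Even this crude attack collapses accuracy on the targeted class, and the accuracy-deviation countermeasure sketched in the abstract corresponds to flagging exactly this kind of drop between the clean and poisoned models.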