Biblio
Poisoning attacks, in which an adversary misleads the learning process by manipulating the training set, can significantly degrade the performance of classifiers in security applications. This paper proposes a robust learning method that reduces the influence of attack samples on learning. The sensitivity, defined as the fluctuation of the output under small perturbations of the input, in the Localized Generalization Error Model (L-GEM) is measured for each training sample. The classifier's output on attack samples is likely to be sensitive and inaccurate, since these samples differ from the untainted ones. An importance score is assigned to each sample according to its localized generalization error bound. The classifier is then trained on a new training set obtained by resampling the samples according to their importance scores. A radial basis function neural network (RBFNN) is used as the classifier in the experimental evaluation. The proposed model outperforms the traditional one under well-known label-flip poisoning attacks, including nearest-first and farthest-first flip attacks.
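As a rough illustration of the resampling step described above, a minimal Python sketch (the L-GEM-based scoring is the paper's contribution and is not reproduced; the scores here are assumed given):

```python
import numpy as np

def resample_by_importance(X, y, scores, rng=None):
    """Resample a training set in proportion to per-sample importance
    scores. `scores` stands in for the L-GEM-derived importance scores;
    samples with low scores (suspected attack points) are drawn less
    often, reducing their influence on training."""
    rng = rng or np.random.default_rng(0)
    p = np.asarray(scores, dtype=float)
    p = p / p.sum()                      # normalize to a distribution
    idx = rng.choice(len(X), size=len(X), replace=True, p=p)
    return X[idx], y[idx]

# Hypothetical usage: scores near 0 mark suspected flipped labels.
X = np.random.randn(6, 2)
y = np.array([0, 1, 0, 1, 0, 1])
scores = np.array([1.0, 1.0, 0.05, 1.0, 1.0, 0.05])
X_r, y_r = resample_by_importance(X, y, scores)
```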
Workflows capture complex operational processes and include security constraints limiting which users can perform which tasks. An improper security policy may prevent certain tasks from being assigned and may force a policy violation. Deciding whether a valid user-task assignment exists for a given policy is known to be extremely complex, especially when considering user unavailability (known as the resiliency problem). Tools are therefore required that allow automatic evaluation of workflow resiliency. Modelling well-defined workflows is fairly straightforward; however, user availability can be modelled in multiple ways for the same workflow. The correct choice of model is a complex yet necessary concern, as it has a major impact on the calculated resiliency. We describe a number of user availability models and their encoding in the model checker PRISM, used to evaluate resiliency. We also show how the choice of model can affect the resiliency computation in terms of its value, memory, and CPU time.
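The PRISM encodings themselves are not reproduced here; as an illustrative stand-in, the hypothetical sketch below estimates resiliency by Monte Carlo: each user is independently available with some probability, and the workflow completes if every task can be assigned an available, authorized user (a toy separation-of-duty constraint included):

```python
import itertools, random

# Hypothetical workflow: 3 tasks, 3 users; auth[t] = users allowed on t.
auth = {0: {0, 1}, 1: {1, 2}, 2: {0, 2}}
sod = [(0, 1)]                 # tasks 0 and 1 need different users
p_avail = [0.9, 0.8, 0.7]      # per-user availability probabilities

def completes(available):
    """True if some valid full assignment exists over available users."""
    for assign in itertools.product(*[sorted(auth[t]) for t in auth]):
        if any(assign[a] == assign[b] for a, b in sod):
            continue
        if all(assign[t] in available for t in auth):
            return True
    return False

def resiliency(trials=100_000, seed=1):
    rng = random.Random(seed)
    ok = sum(
        completes({u for u, p in enumerate(p_avail) if rng.random() < p})
        for _ in range(trials)
    )
    return ok / trials

print(resiliency())   # estimated probability the workflow can complete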
The use of multiple independent spanning trees (ISTs) for data broadcasting in networks provides a number of advantages, including increased fault tolerance, bandwidth, and security. Thus, the design of multiple ISTs on several classes of networks has been widely investigated. In this paper, we give an algorithm to construct ISTs on enhanced hypercubes Qn,k, which contain folded hypercubes as a subclass. Moreover, we show that these ISTs are near-optimal in height and path length. Let D(Qn,k) denote the diameter of Qn,k. If n - k is odd or n - k ∈ {2, n}, we show that all the heights of the ISTs are equal to D(Qn,k) + 1, and thus are optimal. Otherwise, we show that each path from a node to the root in a spanning tree has length at most D(Qn,k) + 2. In particular, no more than 2.15 percent of nodes have the maximum path length. As a by-product, we improve the upper bound on the wide diameter (respectively, fault diameter) of Qn,k from these path lengths.
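The IST construction is the paper's contribution and is not reproduced here. As background, under one common convention (an assumption; indexing conventions vary in the literature), Qn,k is the hypercube Qn augmented with one complementary edge per node flipping bits k through n, so k = 1 yields the folded hypercube. A minimal sketch of the neighbor sets under that assumed convention:

```python
def neighbors(x, n, k):
    """Neighbors of node x (an n-bit integer) in the enhanced hypercube
    Q_{n,k}, under the assumed convention: the n ordinary hypercube
    edges plus one complementary edge flipping bits k..n (1-indexed).
    With k = 1 every bit flips, giving the folded hypercube."""
    nbrs = [x ^ (1 << i) for i in range(n)]       # hypercube edges
    mask = ((1 << n) - 1) ^ ((1 << (k - 1)) - 1)  # bits k..n set
    nbrs.append(x ^ mask)                         # complementary edge
    return nbrs

# e.g. every node of Q_{4,2} has degree n + 1 = 5
print(sorted(neighbors(0b0000, 4, 2)))
```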
The safety, security, and resilience of international postal, shipping, and transportation critical infrastructure are vital to the global supply chain that enables worldwide commerce and communications. But security on an international scale continues to fail in the face of new threats, such as the discovery by Panamanian authorities of suspected components of a surface-to-air missile system aboard a North Korean-flagged ship in July 2013 [1]. This reality calls for new and innovative approaches to critical infrastructure security. Owners and operators of critical postal, shipping, and transportation operations need new methods to identify, assess, and mitigate security risks and gaps in the most effective manner possible.
Computing a user-task assignment for a workflow that comes with probabilistic user availability provides a measure of completion rate, or resiliency. To a workflow designer this indicates a risk of failure, which is especially useful for workflows that cannot be changed due to rigid security constraints. Furthermore, resiliency can help outline a mitigation strategy stating actions that can be performed to avoid workflow failures. A workflow with choice may have many different resiliency values, one for each of its execution paths. This makes understanding failure risk and mitigation requirements much more complex. We introduce resiliency variance, a new analysis metric for workflows which indicates volatility from the resiliency average. We suggest this metric can help determine the risk taken on by implementing a given workflow with choice. For instance, a high average resiliency and low variance would suggest a low risk of workflow failure.
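A minimal sketch of the variance metric as described (per-path resiliency values are assumed given, e.g. computed by a model checker or by Monte Carlo as sketched earlier):

```python
from statistics import mean, pvariance

# Hypothetical resiliency values, one per execution path of a workflow
# with choice (each value: that path's probability of completion).
path_resiliency = [0.92, 0.88, 0.91, 0.35]

avg = mean(path_resiliency)
var = pvariance(path_resiliency)   # volatility from the resiliency average

# High average with low variance suggests low risk; the outlier path
# here (0.35) inflates the variance and flags a risky choice branch.
print(f"average={avg:.3f} variance={var:.4f}")
```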
The integration of physical systems with distributed embedded computing and communication devices offers advantages in reliability, efficiency, and maintenance. At the same time, these embedded computers are susceptible to cyber-attacks that can harm the performance of the physical system, or even drive the system to an unsafe state; therefore, it is necessary to deploy security mechanisms that are able to automatically detect, isolate, and respond to potential attacks. Detection and isolation mechanisms have been widely studied for different types of attacks; however, automatic response to attacks has attracted considerably less attention. Our goal in this paper is to identify trends and recent results on how to respond to and reconfigure a system under attack, and to identify limitations and open problems. We have found two main types of attack protection: i) preventive, which identifies the vulnerabilities in a control system and then increases its resiliency by modifying either control parameters or the redundancy of devices; ii) reactive, which responds as soon as the attack is detected (e.g., by modifying the actions of non-compromised controllers).
The ever-growing performance of supercomputers brings demanding requirements on energy efficiency and resilience, due to the rapidly expanding size and duration of use of large-scale computing systems. Many application- and architecture-dependent parameters that individually determine energy efficiency and resilience have causal effects on each other, which directly affect the trade-offs among performance, energy efficiency, and resilience at scale. Enabling high-efficiency management of today's large-scale High-Performance Computing (HPC) systems thus requires a quantitative understanding of the entangled effects among performance, energy efficiency, and resilience. While previous work focuses on exploring energy-saving and resilience-enhancing opportunities separately, little has been done to theoretically and empirically investigate the interplay between energy efficiency and resilience at scale. In this article, by extending Amdahl's Law and the Karp-Flatt metric to take resilience into consideration, we quantitatively model integrated energy efficiency in terms of performance per Watt and showcase the trade-offs among typical HPC parameters, such as the number of cores, frequency/voltage, and failure rates. Experimental results for a wide spectrum of HPC benchmarks on two HPC systems show that the proposed models are accurate in extrapolating resilience-aware performance and energy efficiency, and capable of capturing the interplay among various energy-saving and resilience factors. Moreover, the models can help find the optimal HPC configuration for the highest integrated energy efficiency in the presence of failures and applied resilience techniques.
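The sketch below is NOT the paper's model, only a toy in the same spirit: an Amdahl-style speedup, discounted by checkpoint/restart overhead under a simple failure model, divided by a linear power model. All parameters and the Daly-style checkpoint interval are assumptions:

```python
from math import sqrt

def perf_per_watt(n_cores, f_par=0.95, lam=1e-6, ckpt=10.0,
                  p_core=5.0, p_base=50.0):
    """Toy resilience-aware performance-per-Watt model. Assumptions:
      - Amdahl speedup with parallel fraction f_par over n_cores;
      - per-core failure rate lam (failures/s), checkpoint cost ckpt (s),
        interval set by Daly's first-order optimum;
      - power = p_base + n_cores * p_core (Watts)."""
    speedup = 1.0 / ((1.0 - f_par) + f_par / n_cores)
    rate = n_cores * lam                         # system failure rate
    tau = sqrt(2.0 * ckpt / rate)                # checkpoint interval
    waste = ckpt / tau + rate * tau / 2.0        # ckpt + rework overhead
    eff_speedup = speedup * (1.0 - waste)
    power = p_base + n_cores * p_core
    return eff_speedup / power

# Sweep core counts: performance per Watt eventually drops as failure
# overhead and power grow faster than the Amdahl speedup.
for n in (64, 256, 1024, 4096):
    print(n, round(perf_per_watt(n), 5))
```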
Cyber-physical systems (CPSs) are found in many applications, such as power networks, manufacturing processes, and air and ground transportation systems. Maintaining the security of these systems under cyber attacks is an important and challenging task, since such attacks can be erratic and thus difficult to model. Secure estimation problems study how to estimate the true system states when measurements are corrupted and/or control inputs are compromised by attackers. The authors of [1] proposed a secure estimation method for the case where the set of attacked nodes (sensors, controllers) is fixed. In this paper, we extend these results to scenarios in which the set of attacked nodes can change over time. We formulate this secure estimation problem as the classical error correction problem [2] and show that accurate decoding can be guaranteed under a certain condition. Furthermore, we propose a combined secure estimation method that couples our secure estimator with the Kalman Filter (KF) for improved practical performance. Finally, we demonstrate the performance of our method through simulations of two scenarios in which an unmanned aerial vehicle is under adversarial attack.
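The decoding condition and estimator design are the paper's contribution; as a generic illustration of the error-correction view, the sketch below recovers a state from measurements hit by a sparse attack via l1 minimization, a standard decoding relaxation (all matrices and sizes are hypothetical):

```python
import numpy as np
from scipy.optimize import linprog

def l1_decode(H, y):
    """Solve min_x ||y - H x||_1, a standard convex relaxation for
    decoding measurements y = H x + e with a sparse attack vector e.
    LP variables z = [x (free), t >= 0]; minimize sum(t)
    subject to -t <= y - H x <= t."""
    m, n = H.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])
    A = np.block([[H, -np.eye(m)], [-H, -np.eye(m)]])
    b = np.concatenate([y, -y])
    bounds = [(None, None)] * n + [(0, None)] * m
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds)
    return res.x[:n]

rng = np.random.default_rng(0)
H = rng.standard_normal((20, 3))       # redundant measurement matrix
x_true = np.array([1.0, -2.0, 0.5])
e = np.zeros(20); e[[3, 11]] = 8.0     # sparse attack on two sensors
x_hat = l1_decode(H, H @ x_true + e)
print(np.round(x_hat, 3))              # ~ x_true despite the attack
```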
This chapter describes triggered control approaches for the coordination of networked cyber-physical systems. Given the coverage of the other chapters of this book, our focus is on self-triggered control and a novel approach we term team-triggered control.
Design and testing of pacemakers is challenging because of the need to capture the interaction between the physical processes (e.g. the voltage signal in cardiac tissue) and the embedded software (e.g. a pacemaker). At the same time, there is a growing need for design and certification methodologies that can provide quality assurance for the embedded software. We describe recent progress in simulation-based techniques that are capable of ensuring guaranteed coverage. Our methods employ discrepancy functions, which impose bounds on system dynamics, and proceed by iteratively constructing over-approximations of the reachable set of states. We are able to prove time-bounded safety or produce counterexamples. We illustrate the techniques by analyzing a family of pacemaker designs against time-duration requirements and synthesizing safe parameter ranges. We conclude by outlining the potential uses of this technology to improve the safety of medical device designs.
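A minimal sketch of the reachability idea (the dynamics and the discrepancy function here are hypothetical stand-ins, not the paper's construction): simulate from the center of an initial set, bloat each trace point by the discrepancy bound, and check the resulting tube against an unsafe threshold:

```python
import numpy as np

def simulate(x0, dt, steps):
    """Hypothetical stable scalar dynamics x' = -x (stand-in model)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * (-xs[-1]))
    return np.array(xs)

def beta(delta, t, gamma=-1.0):
    """Hypothetical discrepancy function: trajectories starting delta
    apart stay within delta * exp(gamma * t) of each other (valid for
    the contractive stand-in dynamics above)."""
    return delta * np.exp(gamma * t)

# Over-approximate the reach set of initial interval [0.8, 1.2]
# and prove time-bounded safety for the requirement x < 1.5.
dt, steps, delta = 0.01, 500, 0.2
ts = np.arange(steps + 1) * dt
center = simulate(1.0, dt, steps)
upper = center + beta(delta, ts)   # bloated tube = reach over-approx
print("time-bounded safe:", bool(np.all(upper < 1.5)))
```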
Software structure analysis is crucial in software testing. Using complex network theory, we present a series of methods and build a two-layer network model for software analysis, including network metric calculation and feature extraction. By identifying critical functions and reused modules, we can reduce the software testing workload by nearly 80% on average. In addition, the structure network exhibits some interesting features that can help in understanding the software more clearly.
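The two-layer model itself is not reproduced; as a generic illustration of identifying critical functions from a call graph, a sketch with networkx (the call edges are hypothetical):

```python
import networkx as nx

# Hypothetical call graph: an edge u -> v means function u calls v.
calls = [("main", "parse"), ("main", "run"), ("run", "step"),
         ("run", "log"), ("step", "util"), ("parse", "util"),
         ("log", "util")]
G = nx.DiGraph(calls)

# Rank functions by betweenness centrality: high-ranked functions sit
# on many call paths, so testing them first covers more behavior.
rank = nx.betweenness_centrality(G)
critical = sorted(rank, key=rank.get, reverse=True)
print(critical[:3])

# Heavily reused modules show up as high in-degree nodes.
reused = sorted(G.nodes, key=G.in_degree, reverse=True)
print(reused[:2])   # e.g. 'util'
```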
This paper introduces a novel team-triggered algorithmic solution for a distributed optimal deployment problem involving a group of mobile sensors. Distributed self-triggered algorithms relieve the requirement of synchronous periodic communication among agents by providing opportunistic criteria for when communication should occur. However, these criteria are often conservative, since worst-case scenarios must always be considered to ensure the monotonic evolution of a relevant objective function. Here we introduce a team-triggered algorithm that builds on the idea of 'promises' among agents, allowing them to operate with better information about their neighbors when they are not communicating, over a dynamically changing graph. We analyze the correctness of the proposed strategy and establish the same convergence guarantees as a coordination algorithm that assumes perfect information at all times. The technical approach relies on tools from set-valued stability analysis, computational geometry, and event-based systems. Simulations illustrate our results.
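The deployment algorithm is beyond a short sketch; the toy below illustrates only the promise mechanism that distinguishes team-triggered from self-triggered coordination (all dynamics, bounds, and parameters are hypothetical): each agent promises to stay within a ball around its last broadcast position, neighbors plan against these promised sets instead of worst-case reachable sets, and a message is sent only when an agent is about to break its promise:

```python
class Agent:
    """Toy 1-D agent with a 'promise': it pledges to stay within
    `radius` of its last broadcast position (hypothetical bound)."""
    def __init__(self, x, radius=0.5):
        self.x = x
        self.radius = radius
        self.broadcast = x          # last position sent to neighbors

    def promised_set(self):
        return (self.broadcast - self.radius, self.broadcast + self.radius)

    def move(self, target, step=0.3):
        self.x += step * (target - self.x)
        if abs(self.x - self.broadcast) > self.radius:  # promise about
            self.broadcast = self.x                     # to break:
            return True                                 # communicate
        return False

# Two agents steer toward the midpoint of the other's promised set,
# communicating only when a promise would otherwise be violated.
a, b = Agent(0.0), Agent(4.0)
msgs = 0
for _ in range(30):
    mid_b = sum(b.promised_set()) / 2
    mid_a = sum(a.promised_set()) / 2
    msgs += a.move(mid_b) + b.move(mid_a)
print(round(a.x, 2), round(b.x, 2), "messages:", msgs)
```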