Bibliography
Testing, an indispensable part of software engineering, is both an art and a science that has emerged as a discipline in its own right. When testing finds defects, testers reduce risk by raising awareness of those defects and of ways to address them before release. When testing finds no defects, it provides assurance that the system functions correctly under the tested conditions. To guarantee that enough testing has been done, the major risk areas must be tested: risks have to be identified, analysed and controlled, and risk items categorized to decide the extent of testing required. In addition, the adoption of structured metrics is lagging in software testing; efficient metrics are needed to evaluate and manage the testing process and to make testing a true engineering discipline. This paper proposes risk-based testing using the FMEA technique and provides a set of metrics that help ensure an effective testing process.
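As a side note for readers unfamiliar with FMEA-style prioritization, the minimal sketch below shows how a Risk Priority Number (RPN = Severity × Occurrence × Detection) can bucket failure modes by required test depth; the failure modes, ratings and threshold are illustrative assumptions, not taken from the paper.

```python
# Minimal FMEA sketch: rank candidate failure modes by Risk Priority Number
# (RPN = Severity x Occurrence x Detection) and flag the ones that warrant
# the deepest test coverage. All entries below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (rare)       .. 10 (almost certain)
    detection: int   # 1 (easy to detect) .. 10 (almost undetectable)

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("payment rejected for valid card", 9, 3, 4),
    FailureMode("report totals off by rounding",   4, 6, 7),
    FailureMode("UI label truncated",              2, 5, 2),
]

RPN_THRESHOLD = 100  # assumed cut-off for the "test intensively" bucket

for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    bucket = "intensive testing" if m.rpn >= RPN_THRESHOLD else "standard testing"
    print(f"{m.name:40s} RPN={m.rpn:4d} -> {bucket}")
```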
The Internet of Things (IoT) is a popular wireless networking paradigm for data-collection applications. IoT networks are deployed in dense or sparse architectures; dense networks are especially popular because they can gather huge volumes of data. The collected data is analysed by historical or continuous analytical systems, which use back-testing or time-series analytics to observe the desired patterns in the target data. Lost or corrupted interval data carries a high probability of misleading the analysis reports. Data is lost for a variety of reasons, most commonly node failures and connectivity holes, which occur due to physical damage, software malfunction, blackhole/wormhole attacks, route poisoning, etc. In this paper, work is carried out on a new routing scheme for IoT networks that avoids connectivity holes by analysing the activity of wireless nodes and taking appropriate action when required.
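The abstract does not spell out the routing scheme; the sketch below only illustrates one plausible ingredient of node-activity analysis, a heartbeat-style check that drops silent neighbours from the forwarding set. The timeout value and node model are assumptions, not the paper's mechanism.

```python
# Illustrative neighbour-activity monitor: a node records when it last heard
# each neighbour and excludes stale ones from its forwarding candidates,
# which is one simple way to route around an emerging connectivity hole.

import time

class NeighbourTable:
    def __init__(self, timeout_s=30.0):
        self.timeout_s = timeout_s   # assumed liveness timeout in seconds
        self.last_heard = {}         # node_id -> last time heard (monotonic clock)

    def heard_from(self, node_id):
        """Record any packet or beacon received from a neighbour."""
        self.last_heard[node_id] = time.monotonic()

    def forwarding_candidates(self):
        """Neighbours considered alive, i.e. heard within the timeout."""
        now = time.monotonic()
        return [n for n, t in self.last_heard.items()
                if now - t <= self.timeout_s]

table = NeighbourTable(timeout_s=30.0)
table.heard_from("node-17")
table.heard_from("node-23")
print(table.forwarding_candidates())   # both nodes are currently alive
```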
Despite the benefits offered by smart grids, energy producers, distributors and consumers are increasingly concerned about possible security and privacy threats. These threats typically manifest themselves at runtime as new usage scenarios arise and vulnerabilities are discovered. Adaptive security and privacy promise to address these threats by increasing awareness and automating prevention, detection and recovery from security and privacy requirements' failures at runtime by re-configuring system controls and perhaps even changing requirements. This paper discusses the need for adaptive security and privacy in smart grids by presenting some motivating scenarios. We then outline some research issues that arise in engineering adaptive security. We particularly scrutinize published reports by NIST on smart grid security and privacy as the basis for our discussions.
This paper describes a modification of the Attack Tree Analysis (ATA) technique, called Availability Tree Analysis (AvTA), for assessing the dependability (reliability, availability and cyber security) of instrumentation and control systems (ICS). The FMEA, FMECA and IMECA techniques, applied to carry out a preliminary semi-formal and criticality-oriented analysis before the AvTA-based assessment, are described. AvTA models combine reliability and cyber-security subtrees, taking into account the probabilities of ICS recovery in the case of hardware (physical) and software (design) failures and of attacks on components causing failures. Successful recovery events (SREs) avert the corresponding failures in the tree, via OR gates, when the probability of an SRE within the assumed time exceeds the required value. A case study of AvTA-based dependability assessment (the model, the availability function and a decision-making procedure for choosing component and system parameters) for a smart-building ICS (Building Automation System, BAS) is discussed.
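As a rough numerical illustration of how recovery can be folded into a tree evaluation, the sketch below treats each basic failure as "counting" only if recovery does not succeed in time and combines subtrees with independent AND/OR gates. The gate structure and all probabilities are invented for illustration and are not the paper's AvTA model.

```python
# Toy availability-tree evaluation. Gate semantics (independence assumed):
# OR fails if any child fails, AND fails only if all children fail.
# All numbers below are illustrative.

def effective_failure(p_fail, p_recover):
    """Probability that the failure occurs and is not recovered in time."""
    return p_fail * (1.0 - p_recover)

def or_gate(*p_children):
    """Failure probability of an OR gate over independent children."""
    ok = 1.0
    for p in p_children:
        ok *= (1.0 - p)
    return 1.0 - ok

def and_gate(*p_children):
    """Failure probability of an AND gate over independent children."""
    p = 1.0
    for c in p_children:
        p *= c
    return p

# Reliability subtree: hardware (physical) fault vs. software (design) fault.
hw = effective_failure(p_fail=0.02, p_recover=0.7)   # e.g. redundant module
sw = effective_failure(p_fail=0.05, p_recover=0.6)   # e.g. restart/patch
reliability_branch = or_gate(hw, sw)

# Security subtree: successful attack on a component causing a failure.
attack = effective_failure(p_fail=0.03, p_recover=0.5)

system_unavailability = or_gate(reliability_branch, attack)
print(f"approx. availability = {1.0 - system_unavailability:.4f}")
```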
Integrated cyber-physical systems (CPSs), such as the smart grid, are becoming the underpinning technology for major industries. A major concern regarding such systems is the seemingly unexpected large-scale failures, which are often attributed to a small initial shock that escalates due to intricate dependencies within and across the individual counterparts of the system. In this paper, we develop a novel interdependent system model to capture this phenomenon, also known as cascading failures. Our framework consists of two networks that have inherently different characteristics governing their intra-dependency: i) a cyber-network where a node is deemed functional as long as it belongs to the largest connected (i.e., giant) component; and ii) a physical network where nodes are given an initial flow and a capacity, and failure of a node results in redistribution of its flow to the remaining nodes, upon which further failures might take place due to overloading. Furthermore, these two networks are assumed to be interdependent. For simplicity, we consider a one-to-one interdependency model where every node in the cyber-network is dependent upon and supports a single node in the physical network, and vice versa. We provide a thorough analysis of the dynamics of cascading failures in this interdependent system initiated by a random attack. The system robustness is quantified as the surviving fraction of nodes at the end of the cascading failures, and is derived in terms of all the network parameters involved. The analytic results are supported by an extensive numerical study. Among other things, these results demonstrate the ability of our model to capture the unexpected nature of large-scale failures and provide insights on improving system robustness.
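A compact simulation sketch of the described dynamics follows: the giant-component rule on the cyber side, equal-split flow redistribution on the physical side, and one-to-one coupling (node i plays both roles, so a failure on either side takes out its counterpart automatically). The graph size, topology, load/capacity values and attack fraction are assumptions, not the paper's analytical setting.

```python
# Toy cascade in a one-to-one interdependent cyber/physical system.
# Cyber rule: a node functions only inside the giant connected component.
# Physical rule: a newly failed node's flow is split equally among survivors,
# which may overload them and trigger further failures.

import random
import networkx as nx

random.seed(0)
N = 200
cyber = nx.erdos_renyi_graph(N, 4.0 / N, seed=1)          # assumed topology
load = {i: 1.0 for i in range(N)}                          # initial flows
capacity = {i: 1.0 + 0.6 * random.random() for i in range(N)}  # assumed margins

failed = set(random.sample(range(N), int(0.1 * N)))        # random initial attack
newly_failed = set(failed)

while newly_failed:
    # Physical side: redistribute the flow of nodes that just failed.
    alive = [i for i in range(N) if i not in failed]
    lost = sum(load[i] for i in newly_failed)
    newly_failed = set()
    if alive and lost > 0:
        share = lost / len(alive)
        for i in alive:
            load[i] += share
            if load[i] > capacity[i]:
                newly_failed.add(i)
    failed |= newly_failed

    # Cyber side: anything outside the giant component stops functioning
    # (and, by the one-to-one coupling, so does its physical counterpart).
    alive = [i for i in range(N) if i not in failed]
    if alive:
        giant = max(nx.connected_components(cyber.subgraph(alive)), key=len)
        dropped = set(alive) - set(giant)
        newly_failed |= dropped
        failed |= dropped

print(f"surviving fraction: {1 - len(failed) / N:.2f}")
```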
Power grid infrastructures have been exposed to several terrorist and cyber attacks from different perspectives, resulting in critical system failures. Among the different attack strategies, a simultaneous attack is feasible for the attacker if enough resources are available at the time. In this paper, vulnerability to simultaneous attacks is analyzed using a modified cascading failure simulator with reduced calculation time compared to existing methods. A new damage measurement matrix is proposed based on the loss of generation power and the time to reach the steady-state condition. The combination of attacks that can cause a total blackout in the shortest time is considered the strongest simultaneous attack on the system from the attacker's viewpoint. The proposed approach can be applied to general power system test cases. In this paper, we conduct experiments on the W&W 6-bus system and the IEEE 30-bus system to demonstrate the results. The modified simulator can automatically find the strongest attack combinations, i.e., those causing maximum damage in terms of generation power loss and time to blackout.
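The sketch below covers only the search layer of such an analysis: it enumerates simultaneous k-component attack combinations and ranks them by a damage pair (generation power lost, time to reach steady state). The cascade simulator is a stand-in stub with placeholder numbers; the paper's modified simulator would be plugged in at `simulate_cascade`.

```python
# Enumerate simultaneous k-component attack combinations and rank them by
# (generation power lost, time to reach steady state). All components and
# damage values are placeholders for illustration.

from itertools import combinations

COMPONENTS = [f"line-{i}" for i in range(1, 10)]   # assumed component list

def simulate_cascade(attacked):
    """Stub: return (generation power lost in MW, time to steady state in s).
    Replace with a real cascading-failure simulation."""
    loss = 25.0 * len(attacked)          # placeholder damage model
    settle_time = 60.0 / len(attacked)   # placeholder settling time
    return loss, settle_time

def strongest_attacks(k, top=3):
    scored = []
    for combo in combinations(COMPONENTS, k):
        loss, t = simulate_cascade(combo)
        # "Strongest" = most generation lost, ties broken by fastest blackout.
        scored.append((loss, -t, combo))
    scored.sort(reverse=True)
    return [(combo, loss, -neg_t) for loss, neg_t, combo in scored[:top]]

for combo, loss, t in strongest_attacks(k=2):
    print(f"{combo}: {loss:.0f} MW lost, steady state in {t:.0f} s")
```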
Earlier studies of the power grid focused on the power system itself; more recent work considers both the power grid and the communication network, and such coupled networks were first termed interdependent networks. Prior work on modeling interdependent networks typically extracts the main features from real networks and assumes that the models of network A and network B are completely symmetrical, both in the intra-network degree distribution and in the inter-network support pattern, but in reality this is hard to attain. In this paper, we deliberately give both networks the same topology in order to focus specifically on the support pattern between the networks. For initial failures originating in either the power grid or the communication network, we find that the remaining surviving fractions are greatly disparate: a failure that originates in the power grid is more harmful than one that originates in the communication network. This highlights the vulnerability introduced by interdependency and suggests that more attention should be paid to protection measures for the power grid.
This paper combines the FMEA and N2 approaches to create a methodology for determining the risks associated with the components of an underwater system. The methodology is based on defining the risk level of each of the components and interfaces that belong to a complex underwater system. As far as the authors know, this approach has not been reported before. The information resulting from these procedures is combined to find the system's critical elements and the interfaces that are most affected by each failure mode. Finally, a calculation is performed to determine the severity level of each failure mode based on the system's critical elements.
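To give a flavour of combining FMEA scores with an N2-style interface view, the toy below gives each component an RPN and a criticality score that also counts the risk reachable through its interfaces. The components, interfaces, ratings and weighting are illustrative assumptions, not the paper's methodology.

```python
# Toy combination of FMEA scores with an N2-style interface list.

# Interfaces: (component A, component B) means A and B share an interface.
interfaces = [("thruster", "power bus"), ("power bus", "controller"),
              ("controller", "sonar"), ("power bus", "sonar")]

# Worst-case FMEA ratings per component: (severity, occurrence, detection).
fmea = {"thruster": (8, 4, 3), "power bus": (9, 3, 5),
        "controller": (7, 4, 4), "sonar": (5, 5, 2)}

rpn = {c: s * o * d for c, (s, o, d) in fmea.items()}

neighbours = {c: set() for c in fmea}
for a, b in interfaces:
    neighbours[a].add(b)
    neighbours[b].add(a)

ALPHA = 0.5   # assumed weight given to interfaced components' risk
criticality = {c: rpn[c] + ALPHA * sum(rpn[n] for n in neighbours[c])
               for c in fmea}

for c in sorted(criticality, key=criticality.get, reverse=True):
    print(f"{c:10s} RPN={rpn[c]:3d}  criticality={criticality[c]:6.1f}")
```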
In this paper, we present an algorithm for estimating the state of the power grid following a cyber-physical attack. We assume that an adversary attacks an area by: (i) disconnecting some lines within that area (failed lines), and (ii) obstructing the information from within the area from reaching the control center. Given the phase angles of the buses outside the attacked area under the AC power flow model (before and after the attack), the algorithm estimates the phase angles of the buses and detects the failed lines inside the attacked area. The novelty of our approach is the transformation of the line failure detection problem, which is combinatorial in nature, into a convex optimization problem. As a result, our algorithm can detect any number of line failures in a running time that is independent of the number of failures and depends solely on the size of the network. To the best of our knowledge, this is the first convex relaxation for the problem of line failure detection using phase angle measurements under the AC power flow model. We evaluate the performance of our algorithm on the IEEE 118- and 300-bus systems, and show that it estimates the phase angles of the buses with less than 1% error, and can detect the line failures with 80% accuracy for single, double, and triple line failures.
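The sketch below conveys only the general idea of convexifying failure detection: the unknown interior angles and a sparse outage vector are estimated jointly by replacing the combinatorial sparsity constraint with an l1 penalty. The matrices and data are random placeholders, not the paper's AC power flow formulation.

```python
# Generic sparse-recovery stand-in for the convex relaxation idea (cvxpy).

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n_theta, n_lines = 40, 10, 25

A = rng.standard_normal((m, n_theta))       # placeholder measurement model
E = rng.standard_normal((m, n_lines))       # placeholder line-outage signatures
theta_true = rng.standard_normal(n_theta)
x_true = np.zeros(n_lines)
x_true[[3, 17]] = rng.standard_normal(2)    # two "failed" lines
y = A @ theta_true + E @ x_true + 0.01 * rng.standard_normal(m)

theta = cp.Variable(n_theta)                # unknown interior phase angles
x = cp.Variable(n_lines)                    # sparse outage indicator (relaxed)
lam = 0.1                                   # assumed regularization weight
objective = cp.Minimize(cp.sum_squares(y - A @ theta - E @ x) + lam * cp.norm1(x))
cp.Problem(objective).solve()

detected = np.flatnonzero(np.abs(x.value) > 0.1)
print("detected line indices:", detected)   # ideally lines 3 and 17
```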
With Software Defined Networking (SDN), the control plane logic of forwarding devices (switches and routers) is extracted and moved to an entity called the SDN controller, which acts as a broker between the network applications and the physical network infrastructure. Failures of the SDN controller inhibit the network's ability to respond to new application requests and to react to events coming from the physical network. Despite the huge impact that the controller has on the performance of the network as a whole, a comprehensive study of its failure dynamics is still missing from the literature. The goal of this paper is to analyse, model and evaluate the impact that different controller failure modes have on its availability. A model in the formalism of Stochastic Activity Networks (SAN) is proposed and applied to a case study of a hypothetical controller based on commercial controller implementations. In the case study, we show how the proposed model can be used to estimate the controller's steady-state availability and to quantify the impact of different failure modes on controller outages, as well as the effects of software ageing and the impact of software reliability growth on the transient behaviour.
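The paper builds a SAN model; as a much simpler stand-in, the sketch below computes the steady-state availability of a hypothetical controller from a flat continuous-time Markov chain with two failure modes and repair. All rates are assumptions.

```python
# Steady-state availability from a tiny CTMC with states:
# 0 = up, 1 = down (software crash), 2 = down (hardware fault).
# Rates are per hour and purely illustrative.

import numpy as np

lam_sw, lam_hw = 1 / 500.0, 1 / 5000.0   # failure rates (assumed)
mu_sw, mu_hw = 12.0, 0.5                 # repair rates (assumed)

# Generator matrix Q: rows sum to zero, Q[i][j] = rate from state i to j.
Q = np.array([
    [-(lam_sw + lam_hw), lam_sw,  lam_hw],
    [ mu_sw,            -mu_sw,   0.0   ],
    [ mu_hw,             0.0,    -mu_hw ],
])

# Solve pi @ Q = 0 with sum(pi) = 1 by replacing one balance equation.
A = np.vstack([Q.T[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)

print(f"steady-state availability ~ {pi[0]:.6f}")
print(f"downtime share: software {pi[1]:.2e}, hardware {pi[2]:.2e}")
```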
Interconnect opens are known to be one of the predominant defects in nanoscale technologies. Automatic test pattern generation for open faults is challenging because of their rather unstable behavior and the numerous electrical parameters that need to be considered. Thus, most approaches try to avoid accurate modeling of all constraints, such as the influence of the aggressors on the open net, and use simplified fault models in order to detect as many faults as possible, or make assumptions that decrease both complexity and accuracy. This leads to the problem that not only may generated tests be invalidated, but the localization of a specific fault may also fail if such a model is used as the basis for diagnosis. Furthermore, most of the models do not consider the problem of oscillating behavior, caused by feedback introduced by coupling capacitances, which occurs in almost all designs. In [1], the Robust Enhanced Aggressor Victim model (REAV) was introduced, and in [2] an extension to address the problem of oscillating behavior. The resulting model not only considers the influence of all aggressors accurately but also guarantees robustness against oscillating behavior as well as against process variations affecting the thresholds of gates driven by an open interconnect. In this work we present the first diagnostic classification algorithm for this model. The algorithm considers all constraints enforced by the REAV model accurately, and hence handles unknown values as well as oscillating behavior. In addition, it makes it possible to distinguish faults at the same interconnect, thus reducing the area that has to be considered for physical failure analysis. Experimental results show the high efficiency of the new method, handling circuits with up to 500,000 non-equivalent faults and considerably increasing the diagnostic resolution.
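The REAV-constrained algorithm itself is not reproduced here; purely as a generic illustration of diagnostic classification, the sketch below matches an observed test response against precomputed candidate-fault signatures, treating unknown (X) values as wildcards. The fault names, signatures and dictionary-based approach are hypothetical stand-ins, not the paper's method.

```python
# Generic fault-dictionary style diagnostic classification (illustrative only).
# A response bit may be '0', '1' or 'X' (unknown); an 'X' in either the
# signature or the observation matches anything.

def compatible(signature, observed):
    """True if the candidate fault's response cannot be ruled out."""
    return all(s == o or s == "X" or o == "X"
               for s, o in zip(signature, observed))

# Hypothetical per-fault response signatures over 8 test patterns.
dictionary = {
    "open@net42_seg1": "10X1001X",
    "open@net42_seg2": "10110010",
    "open@net57_seg1": "0X110110",
}

observed = "101100X0"   # measured tester response (hypothetical)

candidates = [f for f, sig in dictionary.items() if compatible(sig, observed)]
print("faults consistent with the observation:", candidates)
```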
Power networks can be modeled as networked structures with nodes representing the bus bars (connected to generators, loads and transformers) and links representing the transmission lines. In this manuscript we study cascading failures in power networks. As network structures we consider the IEEE 118-bus network and a random spatial model network with similar properties to the IEEE 118-bus network. A maximum-flow based model is used to find the central edges. We study cascading failures triggered by both random and targeted attacks on the edges. In the targeted attack, the edge with the maximum centrality value is disconnected from the network. A number of metrics, including the size of the largest connected component, the number of failed edges, the average maximum flow and the global efficiency, are studied as a function of the capacity parameter (an edge's critical load is proportional to its capacity parameter and nominal centrality value). For each case we identify the critical capacity parameter at which the network shows resilient behavior against failures. The experiments show that the network must be protected more strongly against a targeted attack than against a random failure.
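A small sketch of the targeted-attack loop follows. It uses networkx edge betweenness as a stand-in for the paper's maximum-flow based centrality, and the graph, capacity parameter and proportional-load rule are assumptions.

```python
# Targeted-attack cascade on edges: each edge's critical load is alpha times
# its initial centrality; removing the most central edge shifts load
# (recomputed centrality) and may overload other edges.

import networkx as nx

alpha = 1.3                                        # capacity (tolerance) parameter
G = nx.random_geometric_graph(60, 0.22, seed=3)    # spatial toy network

initial = nx.edge_betweenness_centrality(G)
capacity = {tuple(sorted(e)): alpha * c for e, c in initial.items()}

# Targeted attack: remove the single most central edge.
target = max(initial, key=initial.get)
G.remove_edge(*target)

# Cascade: recompute centrality and drop overloaded edges until stable.
while True:
    load = nx.edge_betweenness_centrality(G)
    overloaded = [e for e, l in load.items()
                  if l > capacity.get(tuple(sorted(e)), float("inf"))]
    if not overloaded:
        break
    G.remove_edges_from(overloaded)

giant = max(nx.connected_components(G), key=len)
print("failed edges:", len(capacity) - G.number_of_edges())
print("largest component fraction:", len(giant) / G.number_of_nodes())
print("global efficiency:", round(nx.global_efficiency(G), 3))
```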
Cascading failure is an intrinsic threat to the power grid that can impose enormous costs on society, and it is very challenging to analyze. The risk of a cascading failure depends both on its probability and on the severity of its consequences. Since it is impossible to analyze all possible initiating attacks, only the critical, high-probability initial events should be identified in order to estimate the risk of cascading failure efficiently. To recognize these critical, high-probability events, a cascading failure analysis model for the power transmission grid is established in this paper based on complex network theory (CNT). A risk coefficient for transmission lines, which considers betweenness, load rate and a variable outage probability, is proposed to determine the initial events of the power grid. The development of a cascading failure is determined by the network topology, the power flow and the boundary conditions. The indicators of expected percentage of load loss and of line cuts are used to estimate the risk of cascading failure caused by a given initial malfunction of the power grid. Simulation results from the IEEE RTS-79 test system show that the risk of cascading failure is closely related to the risk coefficient of the transmission lines. The risk coefficient can be useful for vulnerability assessment and for designing specific actions to reduce the topological weakness and the risk of cascading failure of the power grid.
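The sketch below shows how such a line risk coefficient might be assembled to rank candidate initial events; the multiplicative combination, the small test graph and all line data are assumptions, since the paper defines its own coefficient from betweenness, load rate and outage probability.

```python
# Illustrative line risk coefficient combining normalized edge betweenness,
# load rate and outage probability, used to rank candidate initial events.

import networkx as nx

lines = {  # (from bus, to bus): (load_rate, outage_probability) -- hypothetical
    (1, 2): (0.80, 0.02), (2, 3): (0.55, 0.01), (1, 3): (0.90, 0.03),
    (3, 4): (0.40, 0.02), (2, 4): (0.70, 0.05),
}

G = nx.Graph()
G.add_edges_from(lines)
betw = nx.edge_betweenness_centrality(G, normalized=True)

def risk_coefficient(edge):
    b = betw.get(edge, betw.get(edge[::-1], 0.0))
    load_rate, p_out = lines[edge]
    return b * load_rate * p_out        # assumed multiplicative form

for e in sorted(lines, key=risk_coefficient, reverse=True):
    print(f"line {e}: risk coefficient = {risk_coefficient(e):.5f}")
```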