Biblio
False alarms and misses are the two general kinds of alarm errors, and both can decrease operators' trust in an alarm system. Trust in such systems takes two distinct forms, represented in this research by two kinds of responses to alarms: compliance and reliance. Beyond false alarms and misses, the two responses are differentially affected by properties of the alarm system, situational factors, and operator factors. However, most existing studies have only qualitatively analyzed the relationship between a single variable and the two responses. In this research, all available experimental studies were identified through database searches using the keywords "compliance and reliance", with no restriction on year of publication up to December 2017. Six relevant studies and fifty-two sets of key data were obtained as the database for this research. A neural network is then adopted as a tool to establish the quantitative relationship between multiple factors and each of the two forms of trust. The result is of significance for further study of the influence of human decision making on the overall fault detection rate and false alarm rate of the human-machine system.
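As an illustration of the kind of quantitative mapping described above, the sketch below fits a small multi-output neural network that regresses compliance and reliance scores on alarm-system and operator factors. The factor names and data are hypothetical placeholders (the study's fifty-two data sets are not reproduced here), and scikit-learn's MLPRegressor stands in for whatever network the authors used:

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical predictors: false-alarm rate, miss rate, workload, time pressure.
X = rng.uniform(0, 1, size=(52, 4))
# Hypothetical targets: compliance and reliance scores in [0, 1].
y = np.column_stack([
    1 - 0.8 * X[:, 0] + 0.05 * rng.normal(size=52),   # compliance drops with false alarms
    1 - 0.8 * X[:, 1] + 0.05 * rng.normal(size=52),   # reliance drops with misses
])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X_tr, y_tr)
print("R^2 on held-out data:", net.score(X_te, y_te))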
Design for Testability (DfT) techniques allow devices to be tested at various stages of the manufacturing process. Scan architecture is the dominant DfT technique, providing a high level of fault coverage, observability, and controllability. However, scan architecture can also be exploited by hardware attackers to extract critical information stored within the device. The security threats arising from the unrestricted access provided by scan architecture have to be addressed to ensure hardware security. In this work, a solution based on the Clock and Data Recovery (CDR) method is presented to authenticate users and limit access to the scan architecture to authorized users. Compared with the available solution, the proposed method offers robust performance and reduces the area overhead by more than 10%.
The following topics are dealt with: feature extraction; data mining; support vector machines; mobile computing; photovoltaic power systems; mean square error methods; fault diagnosis; natural language processing; control system synthesis; and Internet of Things.
The evolution of convolutional neural networks (CNNs) into more complex forms of organization, with additional layers, larger convolutions and increasing connections, established the state of the art in accuracy for detection and classification challenges in images. Moreover, as they evolved to a point where gigabytes of memory are required for their operation, it has become fundamental to understand how their inference capabilities can be impaired if data elements somehow become corrupted in memory. This paper introduces fault injection into these systems by simulating failing bit-cells in hardware memories, brought on by relaxing the assumption of 100% reliable operation. We analyze the behavior of these networks when computing inference under severe fault-injection rates and apply fault-mitigation strategies to improve the CNNs' resilience. For the MNIST dataset, we show that 8x less memory is required for the feature-map memory space, and that under sub-100% reliable operation, fault-injection rates up to 10⁻¹ (with most-significant-bit protection) can be withstood with only a 1% degradation in error probability. Furthermore, considering the offload of the feature-map memory to an embedded dynamic RAM (eDRAM) system, using technology nodes from 65 down to 28 nm, up to 73-80% improved power efficiency can be obtained.
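A minimal sketch of the fault-injection idea, assuming 8-bit quantized feature maps and a uniform per-bit failure probability (the paper's actual memory model and CNN are not reproduced here): each storable bit is flipped with the given rate, optionally sparing the most significant bit.

import numpy as np

def inject_bit_faults(fmap_u8, rate, protect_msb=True, rng=None):
    """Flip each storable bit of an 8-bit feature map with probability `rate`."""
    rng = rng or np.random.default_rng()
    bits = 7 if protect_msb else 8          # MSB protection spares bit 7
    out = fmap_u8.copy()
    for b in range(bits):
        mask = rng.random(out.shape) < rate
        out[mask] ^= np.uint8(1 << b)
    return out

fmap = np.random.default_rng(0).integers(0, 256, size=(1, 28, 28), dtype=np.uint8)
faulty = inject_bit_faults(fmap, rate=1e-1)
print("fraction of changed words:", np.mean(faulty != fmap))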
This paper combines FMEA and N² approaches to create a methodology for determining the risks associated with the components of an underwater system. The methodology is based on defining the risk level of each of the components and interfaces that belong to a complex underwater system. As far as the authors know, this approach has not been reported before. The information resulting from these procedures is combined to find the system's critical elements and the interfaces most affected by each failure mode. Finally, a calculation is performed to determine the severity level of each failure mode based on the system's critical elements.
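A toy sketch of how an FMEA risk-priority score can be combined with an N² interface matrix to rank critical elements; the component names, scales and combination rule below are illustrative assumptions, not the authors' exact procedure:

import numpy as np

components = ["thruster", "power bus", "sonar", "controller"]   # hypothetical
# FMEA factors per component: severity, occurrence, detection (1-10 scales).
fmea = np.array([[8, 3, 4],
                 [9, 2, 3],
                 [6, 4, 5],
                 [7, 3, 2]])
rpn = fmea.prod(axis=1)                      # classic risk priority number

# N^2 interface matrix: n2[i, j] = 1 if component i feeds an interface of j.
n2 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 1],
               [0, 0, 0, 1],
               [1, 1, 1, 0]])

# Criticality: a component's own risk plus risk inherited through its interfaces.
criticality = rpn + n2.T @ rpn
for name, score in sorted(zip(components, criticality), key=lambda t: -t[1]):
    print(f"{name:10s} criticality = {score}")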
Root cause analysis (RCA) is a common and recurring task performed by operators of cellular networks. It is done mainly to keep customers satisfied with the quality of offered services and to maximize return on investment (ROI) by minimizing, and where possible eliminating, the root causes of faults in cellular networks. Currently, the actual detection and diagnosis of faults or potential faults is still a manual and slow process, often carried out by network experts who manually analyze and correlate various pieces of network data such as alarms, call traces, configuration management (CM) data and key performance indicator (KPI) data in order to arrive at the most probable root cause of a given network fault. In this paper, we propose an automated fault detection and diagnosis solution called adaptive root cause analysis (ARCA). The solution uses measurements and other network data together with Bayesian network theory to perform automated, evidence-based RCA. Compared to current common practice, our solution is faster owing to automation of the entire RCA process. It is also cheaper because it needs few or no personnel to operate, and it improves efficiency through domain-knowledge reuse during adaptive learning. As it uses a probabilistic Bayesian classifier, it can work with incomplete data and can handle large datasets with complex probability combinations. Experimental results from stratified synthesized data affirmatively validate the feasibility of using such a solution as a key part of self-healing (SH), especially in emerging self-organizing network (SON) based solutions in LTE-Advanced (LTE-A) and 5G.
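The evidence-based inference step can be illustrated with a naive Bayes posterior over candidate root causes, which also shows how incomplete evidence is handled by simply skipping unobserved items; the causes, priors and likelihoods below are invented placeholders rather than ARCA's learned model:

# Hypothetical priors and likelihoods learned from labelled fault cases.
priors = {"sleeping_cell": 0.2, "congestion": 0.5, "hw_failure": 0.3}
likelihood = {                     # P(evidence item observed | root cause)
    "sleeping_cell": {"alarm": 0.1, "kpi_drop": 0.9, "high_load": 0.2},
    "congestion":    {"alarm": 0.3, "kpi_drop": 0.7, "high_load": 0.9},
    "hw_failure":    {"alarm": 0.9, "kpi_drop": 0.8, "high_load": 0.1},
}

def posterior(evidence):
    """Naive-Bayes posterior over root causes; missing evidence is simply skipped."""
    scores = {}
    for cause, prior in priors.items():
        p = prior
        for item, present in evidence.items():
            p_item = likelihood[cause][item]
            p *= p_item if present else (1 - p_item)
        scores[cause] = p
    total = sum(scores.values())
    return {cause: score / total for cause, score in scores.items()}

print(posterior({"alarm": True, "kpi_drop": True}))   # "high_load" left unobserved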
The modular multilevel converter with series and parallel connectivity has been shown to provide advantages in several industrial applications. Its reliability largely depends on the absence of failures in the power semiconductors. We propose and analyze a fault-diagnosis technique that identifies shorted switches based on features generated through a wavelet transform of the converter output and subsequent classification by a support vector machine. The multi-class support vector machine is trained with multiple recordings of the output under each fault condition as well as under normal operation. Simulation results reveal that the proposed method has low classification latency and high robustness. Apart from monitoring of the output, which is required for converter control in any case, the method does not require additional module sensors.
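A compact sketch of the feature-and-classifier pipeline, assuming PyWavelets and scikit-learn and using synthetic signals in place of converter output recordings; the wavelet, decomposition level and fault classes are illustrative choices:

import numpy as np
import pywt
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def wavelet_features(signal, wavelet="db4", level=3):
    """Energy of each wavelet decomposition band as the feature vector."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

def synth_output(fault_class, n=512):
    """Toy stand-in for a converter output recording under a given fault class."""
    t = np.linspace(0, 1, n)
    base = np.sin(2 * np.pi * 50 * t)
    return base + 0.2 * fault_class * np.sin(2 * np.pi * 350 * t) + 0.05 * rng.normal(size=n)

X = np.array([wavelet_features(synth_output(c)) for c in range(4) for _ in range(30)])
y = np.repeat(np.arange(4), 30)               # 0 = normal, 1..3 = shorted-switch classes

clf = SVC(kernel="rbf", decision_function_shape="ovo").fit(X, y)
print("training accuracy:", clf.score(X, y))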
This paper presents a novel sensor parameter fault diagnosis method for general multiple-input multiple-output (MIMO) affine nonlinear systems based on an adaptive observer. Firstly, the affine nonlinear systems are transformed into a particular form via a diffeomorphic transformation constructed using Lie derivatives. Then, based on high-gain observer and adaptive estimation techniques, an adaptive observer structure is designed with a simple method for jointly estimating the states and the unknown parameters in the output equation of the nonlinear systems, and an algorithm for fault estimation is derived. The global exponential convergence of the proposed observer is proved succinctly. The proposed method can also be applied directly to the fault diagnosis of general affine nonlinear systems thanks to the reversibility of the aforementioned coordinate transformation. Finally, a numerical example is presented to illustrate the efficiency of the proposed fault diagnosis scheme.
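The joint state/parameter estimation idea can be illustrated on a scalar toy plant with an unknown sensor gain; the model, gains and adaptation law below are a simplified stand-in for the paper's general MIMO construction, not its actual design:

import numpy as np

# Toy plant: x_dot = -x + u,  y = theta * x  (theta is the unknown sensor parameter).
theta_true, dt, T = 2.0, 1e-3, 10.0
x, x_hat, theta_hat = 1.0, 0.0, 0.5
k, gamma = 20.0, 50.0                      # high-gain and adaptation gains (assumed)

for step in range(int(T / dt)):
    u = np.sin(0.5 * step * dt)            # persistently exciting input
    y = theta_true * x
    y_hat = theta_hat * x_hat
    # Observer: copy of the plant plus output-error injection.
    x_hat += dt * (-x_hat + u + k * (y - y_hat))
    # Adaptation law driven by the output error.
    theta_hat += dt * (gamma * x_hat * (y - y_hat))
    x += dt * (-x + u)

print(f"state error = {abs(x - x_hat):.4f}, parameter error = {abs(theta_true - theta_hat):.4f}")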
Complex systems are prevalent in many fields such as finance, security and industry. A fundamental problem in system management is to perform diagnosis in case of system failure so that the causal anomalies, i.e., the root causes, can be identified for system debugging and repair. Recently, the invariant network has proven to be a powerful tool for characterizing complex system behaviors. In an invariant network, a node represents a system component and an edge indicates a stable interaction between two components. Recent approaches have shown that, by modeling fault propagation in the invariant network, causal anomalies can be effectively discovered. Despite their success, the existing methods have a major limitation: they typically assume there is only a single, global fault propagation in the entire network. However, in real-world large-scale complex systems, it is more common for multiple fault propagations to grow simultaneously and locally within different node clusters and to jointly define the system failure status. Inspired by this key observation, we propose a two-phase framework to identify and rank causal anomalies. In the first phase, probabilistic clustering is performed to uncover impaired node clusters in the invariant network. In the second phase, a low-rank network diffusion model is designed to backtrack causal anomalies in the different impaired clusters. Extensive experimental results on real-life datasets demonstrate the effectiveness of our method.
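A simplified sketch of the two-phase idea on a tiny invariant network: impaired clusters are taken as given (standing in for the probabilistic clustering phase), and a random-walk-with-restart diffusion ranks candidate causal anomalies within each cluster; the adjacency, anomaly scores and restart parameter are illustrative, not the paper's low-rank model:

import numpy as np

# Hypothetical invariant network (adjacency) and per-node broken-invariant scores.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 0, 0, 0],
              [0, 0, 0, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
broken = np.array([0.9, 0.1, 0.2, 0.0, 0.7, 0.1])   # observed anomaly evidence

# Phase 1 (toy stand-in for probabilistic clustering): two impaired clusters.
clusters = [[0, 1, 2], [3, 4, 5]]

# Phase 2: random-walk-with-restart diffusion inside each impaired cluster.
def rank_cluster(nodes, alpha=0.5, iters=100):
    W = A[np.ix_(nodes, nodes)]
    W = W / np.maximum(W.sum(axis=0, keepdims=True), 1e-12)   # column-normalise
    e = broken[nodes] / max(broken[nodes].sum(), 1e-12)        # restart distribution
    r = e.copy()
    for _ in range(iters):
        r = alpha * W @ r + (1 - alpha) * e
    return dict(zip(nodes, r))

for nodes in clusters:
    print("cluster", nodes, "causal-anomaly ranking:", rank_cluster(nodes))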
As a consequence of the recent development of situational-awareness technologies for smart grids, the gathering and analysis of data from multiple sources offer a significant opportunity for enhanced fault diagnosis. In order to achieve improved accuracy for both fault detection and classification, a novel combined data analytics technique is presented and demonstrated in this paper. The proposed technique is based on a segmented approach to Bayesian modelling that provides probabilistic graphical representations of both the electrical power and data communication networks. In this manner, the reliability of both the data communication and electrical power networks is considered in order to improve overall power system transmission-line fault diagnosis.
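One way to picture the combined modelling is a small Bayesian calculation in which relay evidence is weighted by the reliability of the communication link carrying it; all probabilities below are invented for illustration and do not come from the paper:

# Hypothetical probabilities: a line fault triggers a relay alarm, but the alarm
# reaches the control centre only if the communication link works.
p_fault = 0.01                  # prior probability of a transmission-line fault
p_alarm_given_fault = 0.95      # relay sensitivity
p_alarm_given_ok = 0.02         # relay false-alarm rate
p_link_ok = 0.9                 # reliability of the data communication network

def p_fault_given_observation(alarm_received):
    """Posterior of a line fault given whether an alarm message was received."""
    # P(alarm received | fault state) marginalises over the link state.
    p_rx_fault = p_link_ok * p_alarm_given_fault
    p_rx_ok = p_link_ok * p_alarm_given_ok
    if alarm_received:
        num = p_rx_fault * p_fault
        den = num + p_rx_ok * (1 - p_fault)
    else:
        num = (1 - p_rx_fault) * p_fault
        den = num + (1 - p_rx_ok) * (1 - p_fault)
    return num / den

print("P(fault | alarm received) =", round(p_fault_given_observation(True), 3))
print("P(fault | no alarm)       =", round(p_fault_given_observation(False), 4))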
In the Energy Internet setting, a large amount of alarm information is generated when equipment exceptions and multiple faults occur in a large power grid, which seriously hampers information collection and fault analysis and delays accident handling by the monitoring staff. To address this, this paper proposes a method for real-time monitoring and fault diagnosis in power grid monitoring. It constructs an equipment fault logical model based on five-section alarm information, builds a standard fault information set, and uses a matching algorithm to realize fault information optimization, faulty equipment location, fault type diagnosis, and analysis of false-report and missing-report messages. The validity and practicality of the proposed method are verified on an actual case; the method can shorten the time needed to obtain and analyze fault information, accelerate accident handling, and help ensure the safe and stable operation of the power grid.
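A rough sketch of the matching step, assuming that each standard fault information set is a set of expected alarm messages and that similarity is measured with a Jaccard score; the alarm names and fault patterns are hypothetical, not the paper's actual information model:

# Hypothetical standard fault information sets (expected alarm patterns).
standard_sets = {
    "transformer T1 internal fault": {"T1 differential trip", "T1 breaker open",
                                      "T1 gas relay", "busbar voltage dip"},
    "line L5 phase-to-ground fault": {"L5 distance zone 1", "L5 breaker open",
                                      "L5 reclose failed"},
}

def diagnose(received):
    """Pick the best-matching standard set; report missing and false-report alarms."""
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    best, score = max(((name, jaccard(received, s))
                       for name, s in standard_sets.items()), key=lambda t: t[1])
    expected = standard_sets[best]
    return {"diagnosis": best,
            "confidence": round(score, 2),
            "missing-report": expected - received,
            "false-report": received - expected}

print(diagnose({"T1 differential trip", "T1 breaker open", "busbar voltage dip",
                "unrelated comms alarm"}))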
Transmission line monitoring systems produce a large amount of data, which hinders fault diagnosis. For this reason, approaches that can acquire and automatically interpret the information coming from line monitoring are needed. Furthermore, human errors stemming from operator-dependent real-time decisions need to be reduced. In this paper, a multiple-fault diagnosis method to determine transmission lines' operating conditions is proposed. Different scenarios, including insulator chain contamination with different types and concentrations of pollutants, were modeled by equivalent circuits. Their performance was characterized by leakage current (LC) measurements and related to specific fault modes. A feature extraction algorithm relying on the differences between normal and faulty conditions was used to define qualitative trends for the diagnosis of the various fault modes.
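A minimal sketch of leakage-current feature extraction on synthetic records, assuming RMS level and third-harmonic distortion as the discriminating features and simple thresholds as the qualitative trend rule; the waveforms and thresholds are illustrative only:

import numpy as np

fs, f0 = 5000, 50                          # sampling rate and fundamental (assumed)
t = np.arange(0, 1, 1 / fs)

def lc_features(i_leak):
    """RMS value and 3rd-to-1st harmonic ratio of a leakage-current record."""
    rms = np.sqrt(np.mean(i_leak ** 2))
    spectrum = np.abs(np.fft.rfft(i_leak))
    h1, h3 = spectrum[f0], spectrum[3 * f0]    # 1 Hz resolution: bin index = frequency
    return rms, h3 / h1

# Toy records: clean insulator vs. contaminated insulator (larger, distorted LC).
clean = 0.2e-3 * np.sin(2 * np.pi * f0 * t)
polluted = 1.5e-3 * np.sin(2 * np.pi * f0 * t) + 0.6e-3 * np.sin(2 * np.pi * 3 * f0 * t)

for label, record in [("clean", clean), ("polluted", polluted)]:
    rms, h3_ratio = lc_features(record)
    state = "contamination suspected" if rms > 1e-3 or h3_ratio > 0.2 else "normal"
    print(f"{label:9s} rms = {rms * 1e3:.2f} mA, 3rd/1st = {h3_ratio:.2f} -> {state}")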
This paper presents a solution to a multiple-model based stochastic active fault diagnosis problem over the infinite time horizon. A general additive detection cost criterion is considered to reflect the objectives. Since the system state is unknown, the design consists of a perfect state information reformulation and solution of the optimization problem by approximate dynamic programming. An adaptive particle filter state estimation algorithm based on the effective sample size is proposed to maintain estimate quality while reducing computational costs. A reduction of the information statistics of the state is carried out using non-resampled particles to make the solution feasible. Simulation results illustrate the effectiveness of the proposed design.
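The adaptive resampling idea can be sketched with a bootstrap particle filter on a scalar toy model that resamples only when the effective sample size degenerates; the model, noise levels and threshold below are assumptions for illustration, not the paper's estimator:

import numpy as np

rng = np.random.default_rng(0)
N, T = 500, 50
q, r = 0.1, 0.5                         # process and measurement noise std (assumed)

# Toy model: x_k = 0.9 x_{k-1} + w,  y_k = x_k + v.
x_true = 0.0
particles = rng.normal(0, 1, N)
weights = np.full(N, 1.0 / N)

for k in range(T):
    x_true = 0.9 * x_true + q * rng.normal()
    y = x_true + r * rng.normal()

    particles = 0.9 * particles + q * rng.normal(size=N)        # propagate
    weights *= np.exp(-0.5 * ((y - particles) / r) ** 2)        # weight by likelihood
    weights /= weights.sum()

    # Resample only when the effective sample size degenerates (adaptive resampling).
    ess = 1.0 / np.sum(weights ** 2)
    if ess < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)

print("final estimate:", np.dot(weights, particles), "truth:", x_true)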