Bibliography
In this paper, we propose a scheme for the resilient distributed consensus problem that relies on a set of trusted nodes within the network. Current algorithms that solve the resilient consensus problem demand networks with high connectivity to overrule the effects of adversaries, or require nodes to have access to some non-local information. In our scheme, we incorporate the notion of trusted nodes to guarantee distributed consensus despite any number of adversarial attacks, even in sparse networks. A subset of nodes that are better secured against attacks constitutes the set of trusted nodes. It is shown that the network becomes resilient against any number of attacks whenever the set of trusted nodes forms a connected dominating set within the network. We also study the relationship between trusted nodes and network robustness. Simulations are presented to illustrate and compare our scheme with existing ones.
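As an illustration of the structural condition mentioned above, the following toy sketch (an assumption-laden example, not the paper's algorithm) checks whether a chosen trusted set forms a connected dominating set using networkx; the graph and the trusted sets are made up.

```python
import networkx as nx

def is_connected_dominating_set(G, trusted):
    """True if `trusted` dominates G and induces a connected subgraph."""
    trusted = set(trusted)
    dominating = nx.is_dominating_set(G, trusted)
    connected = nx.is_connected(G.subgraph(trusted))
    return dominating and connected

# Example: a sparse 6-node path graph with different candidate trusted sets.
G = nx.path_graph(6)                                  # nodes 0..5 in a line
print(is_connected_dominating_set(G, {2, 3}))         # False: nodes 0 and 5 not dominated
print(is_connected_dominating_set(G, {1, 2, 3, 4}))   # True: dominating and connected
```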
Multichannel sensor systems are widely used in condition monitoring for effective failure prevention of critical equipment or processes. However, loss of sensor readings due to malfunctions of sensors and/or communication has long been a hurdle to reliable operations of such integrated systems. Moreover, asynchronous data sampling and/or limited data transmission are usually seen in multiple sensor channels. To reliably perform fault diagnosis and prognosis in such operating environments, a data recovery method based on functional principal component analysis (FPCA) can be utilized. However, traditional FPCA methods are not robust to outliers and their capabilities are limited in recovering signals with strongly skewed distributions (i.e., lack of symmetry). This paper provides a robust data-recovery method based on functional data analysis to enhance the reliability of multichannel sensor systems. The method not only considers the possibly skewed distribution of each channel of signal trajectories, but is also capable of recovering missing data for both individual and correlated sensor channels with asynchronous data that may be sparse as well. In particular, grand median functions, rather than classical grand mean functions, are utilized for robust smoothing of sensor signals. Furthermore, the relationship between the functional scores of two correlated signals is modeled using multivariate functional regression to enhance the overall data-recovery capability. An experimental flow-control loop that mimics the operation of the coolant-flow loop in a multimodular integral pressurized water reactor is used to demonstrate the effectiveness and adaptability of the proposed data-recovery method. The computational results illustrate that the proposed method is robust to outliers and more capable than the existing FPCA-based method in terms of the accuracy in recovering strongly skewed signals. In addition, turbofan engine data are also analyzed to verify the capability of the proposed method in recovering non-skewed signals.
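The following minimal sketch illustrates the robust-smoothing idea behind the abstract (a pointwise grand median function followed by principal components of the centered curves) on dense synthetic data; it is a simplified stand-in for the paper's FPCA pipeline, and all signal shapes and dimensions are assumptions.

```python
import numpy as np

# curves: n_signals x n_timepoints matrix of sensor trajectories (toy data).
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
curves = np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal((30, t.size))
curves[0] += 5.0                      # inject one outlying trajectory

# Grand median function: the pointwise median is far less sensitive to the
# outlying trajectory than the pointwise mean.
grand_median = np.median(curves, axis=0)

# Principal components of the centered curves (a dense-data stand-in for FPCA).
centered = curves - grand_median
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
scores = U[:, :3] * s[:3]             # first three functional scores per signal

# Recover a signal from its truncated score representation.
recovered = grand_median + scores @ Vt[:3]
print(np.abs(recovered[1] - curves[1]).mean())   # small relative to the unit-amplitude signal
```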
An aspect of database forensics that has not received much attention in the academic research community yet is the presence of database triggers. Database triggers and their implementations have not yet been thoroughly analysed to establish what possible impact they could have on digital forensic analysis methods and processes. Conventional database triggers are defined to perform automatic actions based on changes in the database. These changes can be on the data level or the data definition level. Digital forensic investigators might thus feel that database triggers do not have an impact on their work. They are simply interrogating the data and metadata without making any changes. This paper attempts to establish if the presence of triggers in a database could potentially disrupt, manipulate or even thwart forensic investigations. The database triggers as defined in the SQL standard were studied together with a number of database trigger implementations. This was done in order to establish what aspects might have an impact on digital forensic analysis. It is demonstrated in this paper that some of the current database forensic analysis methods are impacted by the possible presence of certain types of triggers in a database. Furthermore, it finds that the forensic interpretation and attribution processes should be extended to include the handling and analysis of database triggers if they are present in a database.
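To make the concern concrete, the following hedged sketch (illustrative only, using Python's sqlite3 with made-up table and trigger names) shows how a data-level trigger can silently rewrite an audit trail on every update, which is the kind of behavior the paper argues forensic analysts must account for.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL);
CREATE TABLE audit_log (account_id INTEGER, old_balance REAL, new_balance REAL);

-- A data-level (DML) trigger: every UPDATE also rewrites the audit trail.
CREATE TRIGGER hide_changes AFTER UPDATE ON accounts
BEGIN
    DELETE FROM audit_log WHERE account_id = OLD.id;
    INSERT INTO audit_log VALUES (NEW.id, NEW.balance, NEW.balance);
END;
""")
con.execute("INSERT INTO accounts VALUES (1, 100.0)")
con.execute("INSERT INTO audit_log VALUES (1, 0.0, 100.0)")
con.execute("UPDATE accounts SET balance = 5.0 WHERE id = 1")

# The audit trail now reflects the trigger's rewrite, not the true history.
print(con.execute("SELECT * FROM audit_log").fetchall())   # [(1, 5.0, 5.0)]
```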
This paper proposes a cooperative continuous ant colony optimization (CCACO) algorithm and applies it to accuracy-oriented fuzzy system (FS) design problems. All of the free parameters in a zero- or first-order Takagi-Sugeno-Kang (TSK) FS are optimized through CCACO. The CCACO algorithm performs optimization through multiple ant colonies, where each ant colony is only responsible for optimizing the free parameters in a single fuzzy rule. The ant colonies cooperate to design a complete FS: a complete parameter solution vector (encoding a complete FS) is formed by selecting a subsolution component (encoding a single fuzzy rule) from each colony. Subsolutions in each ant colony are evolved independently using a new continuous ant colony optimization algorithm. In the CCACO, solutions are updated via the techniques of pheromone-based tournament ant path selection, ant wandering operation, and best-ant-attraction refinement. The performance of the CCACO is verified through applications to fuzzy controller and predictor design problems. Comparisons with other population-based optimization algorithms verify the superiority of the CCACO.
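A deliberately simplified sketch of the cooperative decomposition is given below: one colony of candidate parameter vectors per rule, complete solutions assembled by picking one sub-solution from each colony, and Gaussian perturbation standing in for CCACO's pheromone-based operators. The toy objective and all constants are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy objective: a stand-in "fuzzy system" whose quality is just the distance
# of the stacked rule parameters from a hidden target vector.
n_rules, dim, ants = 3, 4, 10
target = rng.uniform(-1, 1, (n_rules, dim))

def fitness(rules):                      # lower is better
    return float(np.sum((rules - target) ** 2))

# One colony (archive of candidate sub-solutions) per fuzzy rule.
colonies = [rng.uniform(-1, 1, (ants, dim)) for _ in range(n_rules)]

best = float("inf")
for _ in range(300):
    # Assemble a complete solution by picking one sub-solution from each colony.
    picks = [int(rng.integers(ants)) for _ in range(n_rules)]
    current = np.array([colonies[r][picks[r]] for r in range(n_rules)])
    # Each colony refines its picked sub-solution by Gaussian perturbation,
    # a crude stand-in for CCACO's pheromone-guided ant operations.
    for r in range(n_rules):
        candidate = current.copy()
        candidate[r] = current[r] + 0.1 * rng.standard_normal(dim)
        if fitness(candidate) < fitness(current):
            colonies[r][picks[r]] = candidate[r]
            current = candidate
    best = min(best, fitness(current))

print(best)   # decreases over iterations as the colonies' sub-solutions improve
```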
In this paper, we propose a remote password authentication scheme based on 3-D geometry and a biometric value of the user. The scheme is simple and practical, and a legitimate user can freely choose and change his or her password using a smart card that stores some information. The security of the system depends on points on the diagonal of a cuboid in a 3-D environment. Using a biometric value makes these points more secure, because the characteristics of body parts cannot be copied or stolen.
Modern society increasingly relies on electrical service, which also brings risks of catastrophic consequences, e.g., large-scale blackouts. In the current literature, researchers reveal the vulnerability of power grids under the assumption that substations/transmission lines are removed or attacked synchronously. In reality, however, it is highly possible that such removals are conducted sequentially. Motivated by this idea, we discover a new attack scenario, called the sequential attack, which assumes that substations/transmission lines can be removed sequentially, not synchronously. In particular, we find that the sequential attack can discover many combinations of substations whose failures can cause a large blackout size. Previously, these combinations were ignored by the synchronous attack. In addition, we propose a new metric, called the sequential attack graph (SAG), and a practical attack strategy based on SAG. In simulations, we adopt three test benchmarks and five comparison schemes. Based on the simulation results and complexity analysis, we find that the proposed scheme achieves strong performance with low complexity.
Enhanced situational awareness is integral to risk management and response evaluation. Dynamic systems that incorporate both hard and soft data sources allow for comprehensive situational frameworks which can supplement physical models with conceptual notions of risk. The processing of widely available semi-structured textual data sources can produce soft information that is readily consumable by such a framework. In this paper, we augment the situational awareness capabilities of a recently proposed risk management framework (RMF) with the incorporation of soft data. We illustrate the beneficial role of hard-soft data fusion in the characterization and evaluation of potential vessels in distress within Maritime Domain Awareness (MDA) scenarios. Risk features pertaining to maritime vessels are defined a priori and then quantified in real time using both hard (e.g., Automatic Identification System, Douglas Sea Scale) and soft (e.g., historical records of worldwide maritime incidents) data sources. A risk-aware metric to quantify the effectiveness of the hard-soft fusion process is also proposed. Though illustrated with MDA scenarios, the proposed hard-soft fusion methodology within the RMF can be readily applied to other domains.
The recent growth of mobile ad hoc networks has expanded the capability of communication between mobile nodes. Mobile ad hoc networks are completely free of pre-existing infrastructure or authentication points, so all present mobile nodes that want to communicate with each other immediately form the topology and initiate requests to send or receive data packets. From a security perspective, communication between mobile nodes via wireless links makes these networks more susceptible to internal or external attacks, because anyone can join or leave the network at any time. In general, the packet-dropping attack launched through malicious node(s) is one of the possible attacks in a mobile ad hoc network. This paper develops an intrusion detection system using fuzzy logic to detect packet-dropping attacks in mobile ad hoc networks and to remove the malicious nodes in order to save the resources of mobile nodes. For the implementation, the Qualnet 6.1 simulator and a Mamdani fuzzy inference system are used to analyze the results. Simulation results show that our system is capable of detecting dropping attacks with a high detection rate and a low false-positive rate.
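The sketch below illustrates the fuzzy-inference idea in a toy form (not the paper's Qualnet/Mamdani implementation): two assumed per-node features, triangular membership functions, and a small rule base producing a suspicion score.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def suspicion(drop_ratio, forward_delay_ms):
    # Antecedent memberships; thresholds are illustrative assumptions.
    drop_high = tri(drop_ratio, 0.4, 0.8, 1.01)
    drop_low = tri(drop_ratio, -0.01, 0.0, 0.4)
    delay_high = tri(forward_delay_ms, 50, 150, 300)
    # Mamdani-style rules: AND = min, aggregation = max.
    rule1 = min(drop_high, delay_high)   # IF drop HIGH AND delay HIGH THEN malicious
    rule2 = drop_high                    # IF drop HIGH THEN malicious (weaker rule)
    malicious = max(rule1, rule2)
    benign = drop_low                    # IF drop LOW THEN benign
    # Crude defuzzification: normalized strength of the "malicious" conclusion.
    return malicious / (malicious + benign + 1e-9)

print(suspicion(0.9, 200))   # near 1 -> node flagged as packet-dropping
print(suspicion(0.05, 20))   # near 0 -> node treated as benign
```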
In certain applications, the locations of events reported by a sensor network need to remain anonymous. That is, unauthorized observers should be unable to detect the origin of such events by analyzing the network traffic. The authors analyze two types of problems: communication overhead and computational load. In this paper, the authors give a new framework for modeling, analyzing, and evaluating anonymity in sensor networks. The novelty of the proposed framework is twofold: first, it introduces the notion of "interval indistinguishability" and provides a quantitative measure to model anonymity in wireless sensor networks; second, it maps source anonymity to a statistical problem. The authors show that the existing approaches for designing statistically anonymous systems introduce correlation in real intervals, whereas fake intervals are uncorrelated. The authors show how mapping source anonymity to hypothesis testing with nuisance parameters leads to converting the problem of exposing private source information into finding an appropriate data transformation that removes or minimizes the impact of the nuisance data using a strong cryptography algorithm. By doing so, the authors transform the problem of analyzing real-valued sample points into one of analyzing binary codes, which opens the door for coding theory to be incorporated into the study of anonymous networks. Existing work is unable to detect an unauthorized observer in the network traffic. This work mainly focuses on enhancing source anonymity against correlation tests; the main goal of source location privacy is to hide the existence of real events.
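A minimal illustration of the correlation leak described above: dummy intervals are i.i.d. while intervals carrying real events are serially correlated, so a simple lag-1 correlation check distinguishes them even when the marginals look alike. The generation model here is an assumption for illustration, not the paper's framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def lag1_corr(x):
    """Sample lag-1 autocorrelation of an interval sequence."""
    return float(np.corrcoef(x[:-1], x[1:])[0, 1])

# Fake (dummy) transmissions: i.i.d. exponential inter-send intervals.
fake = rng.exponential(scale=1.0, size=2000)

# "Real" intervals that carry correlation because genuine events arrive in
# bursts, which is exactly the leak a correlation test can exploit.
shocks = rng.exponential(scale=1.0, size=2000)
real = 0.5 * shocks + 0.5 * np.roll(shocks, 1)

print(lag1_corr(fake))   # near 0: intervals look statistically independent
print(lag1_corr(real))   # clearly positive: an observer can flag real events
```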
Vehicle-to-grid (V2G), involving both charging and discharging of battery vehicles (BVs), enhances the smart grid substantially to alleviate peaks in power consumption. In a V2G scenario, the communications between BVs and the power grid may confront severe cyber security vulnerabilities. Traditionally, authentication mechanisms are solely designed for the BVs when they charge electricity as energy customers. In this paper, we first show that, when a BV interacts with the power grid, it may act in one of three roles: 1) energy demand (i.e., a customer); 2) energy storage; and 3) energy supply (i.e., a generator). In each role, we further demonstrate that the BV has dissimilar security and privacy concerns. Hence, the traditional approach that only considers BVs as energy customers is not universally applicable for the interactions in the smart grid. To address this new security challenge, we propose a role-dependent privacy preservation scheme (ROPS) to achieve secure interactions between a BV and the power grid. In the ROPS, a set of interlinked subprotocols is proposed to incorporate different privacy considerations when a BV acts as a customer, storage, or a generator. We also outline both centralized and distributed discharging operations when a BV feeds energy back into the grid. Finally, security analysis is performed to indicate that the proposed ROPS possesses the required security and privacy properties and is a promising security solution for V2G networks in the smart grid. The identified security challenge as well as the proposed ROPS scheme indicates that role-awareness is crucial for secure V2G networks.
Preserving the availability and integrity of networked computing systems in the face of fast-spreading intrusions requires advances not only in detection algorithms, but also in automated response techniques. In this paper, we propose a new approach to automated response called the response and recovery engine (RRE). Our engine employs a game-theoretic response strategy against adversaries modeled as opponents in a two-player Stackelberg stochastic game. The RRE applies attack-response trees (ART) to analyze undesired system-level security events within host computers and their countermeasures, using Boolean logic to combine lower-level attack consequences. In addition, the RRE accounts for uncertainties in intrusion detection alert notifications. The RRE then chooses optimal response actions by solving a partially observable competitive Markov decision process that is automatically derived from attack-response trees. To support network-level multiobjective response selection and consider possibly conflicting network security properties, we employ fuzzy logic theory to calculate the network-level security metric values, i.e., security levels of the system's current and potentially future states in each stage of the game. In particular, inputs to the network-level game-theoretic response selection engine are first fed into the fuzzy system that is in charge of a nonlinear inference and quantitative ranking of the possible actions using its previously defined fuzzy rule set. Consequently, the optimal network-level response actions are chosen through a game-theoretic optimization process. Experimental results show that the RRE, using Snort's alerts, can protect large networks for which attack-response trees have more than 500 nodes.
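The sketch below shows only the attack-response-tree ingredient in a toy form: leaf alerts with confidences combined through AND/OR nodes to score a top-level consequence. Node names and probabilities are made up, and RRE's game-theoretic response selection itself is not modeled.

```python
def evaluate(node, alerts):
    """Approximate likelihood of the consequence rooted at `node`."""
    kind = node[0]
    if kind == "leaf":
        return alerts.get(node[1], 0.0)          # detector confidence in [0, 1]
    children = [evaluate(c, alerts) for c in node[1]]
    if kind == "and":                            # all children needed
        p = 1.0
        for c in children:
            p *= c
        return p
    if kind == "or":                             # any child suffices
        p = 1.0
        for c in children:
            p *= (1.0 - c)
        return 1.0 - p
    raise ValueError(kind)

# Root consequence: "web server compromised" (illustrative tree and alerts).
art = ("or", [
    ("and", [("leaf", "sql_injection_alert"), ("leaf", "db_admin_login")]),
    ("leaf", "shellcode_alert"),
])
alerts = {"sql_injection_alert": 0.9, "db_admin_login": 0.8, "shellcode_alert": 0.1}
print(evaluate(art, alerts))   # ~0.75: a response engine would act on this state
```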
This paper presents a methodology for developing a statistically sound, robust prognostic condition index and encapsulating this index as a series of highly accurate, transparent, human-readable rules. These rules can be used to further understand degradation phenomena and also provide transparency and trust for any underlying prognostic technique employed. A case study is presented on a wind turbine gearbox, utilising historical supervisory control and data acquisition (SCADA) data in conjunction with a physics-of-failure model. Training is performed without failure data, with the technique accurately identifying gearbox degradation and providing prognostic signatures up to 5 months before catastrophic failure occurred. A robust derivation of the Mahalanobis distance is employed to perform outlier analysis in the bivariate domain, enabling the rapid labelling of historical SCADA data on independent wind turbines. Following this, the RIPPER rule learner was utilised to extract transparent, human-readable rules from the labelled data. A mean classification accuracy of 95.98% of the autonomously derived condition was achieved on three independent test sets, with a mean kappa statistic of 93.96% reported. In total, 12 rules were extracted; following critical analysis by an independent domain expert, two thirds of the rules were deemed intuitive in modelling fundamental degradation behaviour of the wind turbine gearbox.
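A compact sketch of robust bivariate outlier labelling in the spirit described above, using the Minimum Covariance Determinant estimator from scikit-learn as one robust variant of the Mahalanobis distance; the feature model, threshold, and data are assumptions rather than the paper's SCADA setup.

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)

# Bivariate SCADA-style features (e.g., gearbox oil temperature vs. power output),
# with a few degraded-operation points injected as outliers.
healthy = rng.multivariate_normal([60.0, 1500.0], [[4.0, 30.0], [30.0, 900.0]], 500)
degraded = rng.multivariate_normal([75.0, 1400.0], [[4.0, 0.0], [0.0, 900.0]], 10)
X = np.vstack([healthy, degraded])

# Robust location/scatter via the Minimum Covariance Determinant estimator,
# then squared robust Mahalanobis distances for outlier labelling.
mcd = MinCovDet(random_state=0).fit(X)
d2 = mcd.mahalanobis(X)

threshold = np.quantile(d2[:500], 0.99)   # illustrative cut-off from healthy data
labels = d2 > threshold                   # True -> flagged as degraded operation
print(labels[-10:].mean())                # most injected points get flagged
```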
The need for cyber security professionals continues to grow, and education systems are responding in a variety of ways. The US government has weighed in with two efforts: the NICE effort led by NIST and the CAE effort jointly led by NSA and DHS. Industry has unfilled needs, and the CAE program is changing to meet both NICE and industry needs. This paper analyzes these efforts and examines several critical, yet unaddressed, issues facing school programs as they adapt to new criteria and guidelines. Technical issues are easy to enumerate, yet it is the programmatic and student success factors that will define successful programs.
Decreasing the potential for catastrophic consequences poses a significant challenge for high-risk industries. Organizations are under many different pressures, and they are continuously trying to adapt to changing conditions and recover from disturbances and stresses that can arise from both normal operations and unexpected events. Reducing risks in complex systems therefore requires that organizations develop and enhance traits that increase resilience. Resilience provides a holistic approach to safety, emphasizing the creation of organizations and systems that are proactive, interactive, reactive, and adaptive. This approach relies on disciplines such as system safety and emergency management, but also requires that organizations develop indicators and ways of knowing when an emergency is imminent. A resilient organization must be adaptive, using hands-on activities and lessons learned efforts to better prepare it to respond to future disruptions. It is evident from the discussions of each of the traits of resilience, including their limitations, that there are no easy answers to reducing safety risks in complex systems. However, efforts to strengthen resilience may help organizations better address the challenges associated with the ever-increasing complexities of their systems.
Cyber-physical systems (CPS) can potentially benefit a wide array of applications and areas. Here, the authors look at some of the challenges surrounding CPS, and consider a feasible solution for creating a robust, secure, and cost-effective architecture.
Trusted Platform Module (TPM) has gained popularity in computing systems as a hardware security approach. TPM provides boot-time security by verifying platform integrity, including hardware and software. However, once the software is loaded, TPM can no longer protect the software execution. In this work, we propose a dynamic TPM design that performs control flow checking to protect programs from runtime attacks. The control flow checker is integrated at the commit stage of the processor pipeline. The control flow of the program is verified to defend against attacks such as stack smashing using buffer overflow and code reuse. We implement the proposed dynamic TPM design in an FPGA to achieve high performance, low cost, and the flexibility of easy functionality upgrades based on the FPGA. In our design, neither the source code nor the Instruction Set Architecture (ISA) needs to be changed. The benchmark simulations demonstrate less than 1% performance penalty on the processor and effective software protection from the attacks.
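The following toy software model conveys the commit-stage check (the actual design is FPGA hardware): legal branch edges collected offline form a whitelist, and any committed control transfer outside it is flagged. All addresses are illustrative.

```python
# Whitelist of legal (branch_pc, target_pc) pairs, built offline from the binary.
VALID_EDGES = {
    (0x400500, 0x400520),   # direct call into a known function
    (0x400530, 0x400504),   # loop back-edge
    (0x400540, 0x400600),   # return to the recorded call site
}

def commit_branch(branch_pc, target_pc):
    """Raise on any committed control transfer not present in the whitelist."""
    if (branch_pc, target_pc) not in VALID_EDGES:
        raise RuntimeError(
            f"control-flow violation: {hex(branch_pc)} -> {hex(target_pc)}"
        )

commit_branch(0x400500, 0x400520)        # legal edge, passes silently
try:
    # A stack-smashing attack overwrote the return address with shellcode.
    commit_branch(0x400540, 0x7FFFDEAD)
except RuntimeError as err:
    print(err)
```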
As information and communication networks are highly interconnected with the power grid, cyber security of the supervisory control and data acquisition (SCADA) system has become a critical issue in the power system. By intruding into the SCADA system via the remote access points, attackers are able to eavesdrop on critical data and reconfigure devices to trip the system breakers. Cyber attacks are thus able to impact the reliability of the power system through the SCADA system. In this paper, six cyber attack scenarios in the SCADA system are considered. A Bayesian attack graph model is used to evaluate the probabilities of successful cyber attacks on the SCADA system, which will result in breaker trips. A forced outage rate (FOR) model is proposed considering the frequencies of successful attacks on the generators and transmission lines. With increased FOR values resulting from the cyber attacks, the loss-of-load probabilities (LOLP) in reliability test system 79 (RTS79) are estimated. The simulation results demonstrate that the power system becomes less reliable as the frequency of successful attacks increases.
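A deliberately simplified numeric sketch of the pipeline described above: a chain of conditional attack-success probabilities, a forced outage rate inflated by successful attacks, and a tiny two-generator LOLP enumeration. The probabilities, repair time, and system are assumptions, not the paper's Bayesian attack graph or RTS79 data.

```python
import itertools

# Simplified chain attack path: remote access -> SCADA server -> breaker trip.
p_steps = [0.6, 0.5, 0.7]                       # illustrative conditional probabilities
p_attack_success = 1.0
for p in p_steps:
    p_attack_success *= p                       # 0.21 per attempted intrusion

attacks_per_year = 10
hours_per_year = 8760.0
repair_hours = 50.0
# Extra unavailability contributed by cyber-induced trips (crude approximation).
cyber_for = attacks_per_year * p_attack_success * repair_hours / hours_per_year

base_for = 0.05                                 # conventional forced outage rate
gen_for = min(base_for + cyber_for, 1.0)

# Two identical 100 MW generators serving an 80 MW load: enumerate outage states.
capacity, load, lolp = [100.0, 100.0], 80.0, 0.0
for state in itertools.product([0, 1], repeat=2):      # 1 = unit on forced outage
    prob = 1.0
    for s in state:
        prob *= gen_for if s else (1.0 - gen_for)
    available = sum(c for c, s in zip(capacity, state) if s == 0)
    if available < load:
        lolp += prob

print(gen_for, lolp)    # LOLP grows as the attack-inflated FOR increases
```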
This paper deals with the robust H∞ cyber-attack estimation problem for control systems under stochastic cyber-attacks and disturbances. The focus is on designing an H∞ filter that maximizes the attack sensitivity and minimizes the effect of disturbances. The design requires not only disturbance attenuation, but also that the residual retain as much attack sensitivity as possible while the effect of disturbances is minimized. A stochastic model of a control system subject to stochastic cyber-attacks satisfying a Markovian stochastic process is constructed. We also present the stochastic attack models to which a control system is possibly exposed. Furthermore, applying an H∞ filtering technique based on linear matrix inequalities (LMIs), the paper obtains sufficient conditions that ensure the filtering error dynamics are asymptotically stable and satisfy a prescribed ratio between cyber-attack sensitivity and disturbance sensitivity. Finally, the results are applied to the control of a quadruple-tank process (QTP) under a stochastic cyber-attack and a stochastic disturbance. The simulation results underline that the designed filter is effective and feasible in practical applications.
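One common way to formalize the trade-off described in the abstract, written here in generic attack/fault-detection notation that is an assumption rather than the paper's own formulation (the H_- index below is a usual choice for sensitivity), is a mixed sensitivity condition on the filtering error dynamics:

```latex
% Illustrative notation: r = residual, a = attack, d = disturbance,
% T_{r\cdot} = closed-loop transfer function of the filtering error dynamics.
\begin{aligned}
  &\text{find a stable filter such that}\\
  &\quad \lVert T_{rd} \rVert_{\infty} < \gamma \quad \text{(disturbance attenuation)},\\
  &\quad \lVert T_{ra} \rVert_{-} > \beta \quad \text{(attack sensitivity)},\\
  &\text{with the ratio } \gamma/\beta \text{ kept below a prescribed level,}\\
  &\text{each norm bound being certified by an LMI feasibility condition.}
\end{aligned}
```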
Similarity search plays an important role in many applications involving high-dimensional data. Due to the known dimensionality curse, the performance of most existing indexing structures degrades quickly as the feature dimensionality increases. Hashing methods, such as locality sensitive hashing (LSH) and its variants, have been widely used to achieve fast approximate similarity search by trading search quality for efficiency. However, most existing hashing methods make use of randomized algorithms to generate hash codes without considering the specific structural information in the data. In this paper, we propose a novel hashing method, namely, robust hashing with local models (RHLM), which learns a set of robust hash functions to map the high-dimensional data points into binary hash codes by effectively utilizing local structural information. In RHLM, for each individual data point in the training dataset, a local hashing model is learned and used to predict the hash codes of its neighboring data points. The local models from all the data points are globally aligned so that an optimal hash code can be assigned to each data point. After obtaining the hash codes of all the training data points, we design a robust method by employing ℓ2,1-norm minimization on the loss function to learn effective hash functions, which are then used to map each database point into its hash code. Given a query data point, the search process first maps it into the query hash code by the hash functions and then explores the buckets, which have similar hash codes to the query hash code. Extensive experimental results conducted on real-life datasets show that the proposed RHLM outperforms the state-of-the-art methods in terms of search quality and efficiency.
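The sketch below shows the generic bucket-probing search step that the abstract describes, using random sign projections as a stand-in for RHLM's learned hash functions; dimensions, bit length, and data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def hash_codes(X, W):
    """Map points to binary codes via the sign of linear projections."""
    return (X @ W > 0).astype(np.uint8)

def to_key(code):
    return tuple(code.tolist())

dim, bits = 64, 16
W = rng.standard_normal((dim, bits))             # random hash functions (stand-in)
database = rng.standard_normal((10_000, dim))

# Index: group database points into buckets keyed by their hash codes.
buckets = {}
for idx, code in enumerate(hash_codes(database, W)):
    buckets.setdefault(to_key(code), []).append(idx)

# Query: a slightly perturbed copy of database point 42; probe its bucket first.
query = database[42] + 0.05 * rng.standard_normal(dim)
candidates = buckets.get(to_key(hash_codes(query[None, :], W)[0]), [])
print(42 in candidates, len(candidates))         # usually True, with a small candidate set
```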
It has gradually been realized in industry that the increasing complexity of cloud computing, arising from the interaction of technology, business, society, and the like, cannot be addressed simply through research on information technology; rather, it should be explained and studied from a systematic and scientific perspective on the basis of the theory and methods of complex adaptive systems (CAS). Addressing basic problems in the CAS theoretical framework, this article studies the definition of an active adaptive agent constituting the cloud computing system, and proposes a service-agent concept and basic model through commonality abstraction at two basic levels, cloud computing technology and business, thus laying a foundation for further development of cloud computing complexity research as well as for multi-agent-based simulation of cloud computing environments.
Documents such as the Geneva (1949) and Hague Conventions (1899 and 1907) that have clearly outlined the rules of engagement for warfare find themselves challenged by the presence of a new arena: cyber. Considering the potential nature of these offenses, operations taking place in the realm of cyber cannot simply be generalized as "cyber-warfare," as they may also be acts of cyber-espionage, cyber-terrorism, cyber-sabotage, etc. Cyber-attacks, such as those on Estonia in 2007, have begun to test the limits of NATO's Article 5 and the UN Charter's Article 2(4) against the use of force. What defines "force" as it relates to cyber, and what kind of response is merited in the case of uncertainty regarding attribution? In 2009, NATO's Cooperative Cyber Defence Centre of Excellence commissioned a group of experts to publish a study on the application of international law to cyber-warfare. This document, the Tallinn Manual, was published in 2013 as a non-binding exercise to stimulate discussion on the codification of international law on the subject. After analysis, this paper concludes that the Tallinn Manual classifies the 2010 Stuxnet attack on Iran's nuclear program as an illegal act of force. The purpose of this paper is the following: (1) to analyze the historical and technical background of cyber-warfare, (2) to evaluate the Tallinn Manual as it relates to the justification of cyber-warfare, and (3) to examine the applicability of the Tallinn Manual in a case study of a historical example of a cyber-attack.
Face-to-face negotiations always benefit if the interacting individuals trust each other. But trust is also important in online interactions, even for humans interacting with a computational agent. In this article, the authors describe a behavioral experiment to determine whether, by volunteering information that it need not disclose, a software agent in a multi-issue negotiation can alleviate mistrust in human counterparts who differ in their propensities to mistrust others. Results indicated that when cynical, mistrusting humans negotiated with an agent that proactively communicated its issue priority and invited reciprocation, there were significantly more agreements and better utilities than when the agent didn't volunteer such information. Furthermore, when the agent volunteered its issue priority, the outcomes for mistrusting individuals were as good as those for trusting individuals, for whom the volunteering of issue priority conferred no advantage. These findings provide insights for designing more effective, socially intelligent agents in online negotiation settings.