Biblio
This paper investigates closed-form expressions for evaluating the performance of the Compressive Sensing (CS) based Energy Detector (ED). The conventional way to approximate the probability density function of the ED test statistic invokes the central limit theorem and treats the decision variable as Gaussian. This approach, however, provides a good approximation only if the number of samples is large enough, which is not usually the case in the CS framework, where the goal is to keep the sample size low. Moreover, working with a reduced number of measurements is of practical interest for general spectrum sensing in cognitive radio applications, where the sensing time should be sufficiently short, since any time spent sensing cannot be used for data transmission on the detected idle channels. In this paper, we make use of low-complexity approximations based on algebraic transformations of the one-dimensional Gaussian Q-function. More precisely, this paper provides new closed-form expressions for accurate evaluation of the CS-based ED performance as a function of the compressive ratio and the Signal-to-Noise Ratio (SNR). Simulation results demonstrate the increased accuracy of the proposed equations compared to existing works.
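Illustrative sketch (not the paper's new expressions): the snippet below computes the CLT/Gaussian approximation of the energy detector's detection probability for a given number of (compressed) samples m and target false-alarm probability, and compares the exact Q-function with a two-term exponential approximation of the Chiani type; unit noise power and the particular mean/variance model are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import norm

def q_approx(x):
    """Two-term exponential approximation of the Gaussian Q-function
    (Chiani-type), extended to x < 0 via Q(x) = 1 - Q(-x)."""
    ax = abs(x)
    val = np.exp(-ax ** 2 / 2.0) / 12.0 + np.exp(-2.0 * ax ** 2 / 3.0) / 4.0
    return val if x >= 0 else 1.0 - val

def ed_pd_clt(snr_lin, m, pfa):
    """Detection probability of an energy detector over m (compressed) samples
    under the CLT/Gaussian approximation of the test statistic (unit noise power)."""
    lam = norm.isf(pfa) * np.sqrt(2.0 * m) + m           # threshold from the target Pfa
    arg = (lam - m * (1.0 + snr_lin)) / (np.sqrt(2.0 * m) * (1.0 + snr_lin))
    return norm.sf(arg), q_approx(arg)                   # exact Q vs. algebraic approximation

print(ed_pd_clt(snr_lin=1.0, m=32, pfa=0.1))
```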
Intrusion detection systems do not perform well when it comes to detecting zero-day attacks, so improving their performance in that regard is an active research topic. In this study, to detect zero-day attacks with high accuracy, we propose two deep-learning-based anomaly detection models using an autoencoder and a denoising autoencoder, respectively. The key factor that directly affects the accuracy of the proposed models is the threshold value, which was determined using a stochastic approach rather than the approaches available in the current literature. The proposed models were tested using the KDDTest+ dataset contained in NSL-KDD, achieving accuracies of 88.28% and 88.65%, respectively. The obtained results show that, as standalone models, our proposed anomaly detection models outperform other standalone anomaly detection methods and perform almost as well as the newly suggested hybrid anomaly detection models.
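A minimal sketch of the underlying mechanism (not the paper's models or its stochastic threshold procedure): train a small autoencoder on benign records only, score new records by reconstruction error, and flag records whose error exceeds a threshold derived from the empirical error distribution. The data, the sklearn MLPRegressor stand-in autoencoder, and the mean-plus-three-sigma rule are assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(2000, 20))     # stand-in "benign" records
attack = rng.normal(3.0, 1.5, size=(200, 20))      # stand-in "attack" records

# A small MLP trained to reproduce its input plays the role of the autoencoder.
ae = MLPRegressor(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
ae.fit(normal, normal)

def recon_error(model, x):
    return np.mean((model.predict(x) - x) ** 2, axis=1)

err_train = recon_error(ae, normal)
thr = err_train.mean() + 3.0 * err_train.std()     # simple statistical threshold

flags = recon_error(ae, np.vstack([normal[:200], attack])) > thr
print(f"threshold={thr:.3f}, flagged {flags.sum()} of {len(flags)} test records")
```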
This study proposes an efficient formulation for solving the stochastic security-constrained generation capacity expansion planning (SC-GCEP) problem. The main idea is to directly compute the line outage distribution factors (LODF), which are used to model the N-m post-contingency analysis. In addition, the post-contingency power flows are modeled based on the LODF and the partial transmission distribution factors (PTDF). The post-contingency constraints are reformulated using these linear distribution factors (PTDF and LODF) so that both the pre- and post-contingency constraints are modeled simultaneously in the SC-GCEP problem. In the stochastic formulation, the load uncertainty is incorporated through a two-stage multi-period framework, and a K-means clustering technique is used to decrease the number of load scenarios. The main advantage of this methodology is that the post-contingency factors can be computed quickly, especially for multiple-line outages (N-m), which improves the security-constrained analysis by quickly modeling the outage of m transmission lines in the stochastic SC-GCEP problem. Several experiments are carried out on two electrical power systems to validate the performance of the proposed formulation.
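As an illustration of the scenario-reduction step only (the SC-GCEP optimization itself is not reproduced), the sketch below clusters synthetic load scenarios with K-means and keeps the centroids as representative scenarios weighted by their cluster sizes; the scenario data and the choice of k are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# 1000 load scenarios over a 24-period horizon (synthetic stand-ins, in MW)
scenarios = 100.0 + 20.0 * rng.standard_normal((1000, 24))

k = 10  # number of representative scenarios kept in the stochastic model
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(scenarios)

representative = km.cluster_centers_                           # reduced scenario set
probs = np.bincount(km.labels_, minlength=k) / len(scenarios)  # scenario weights
print(probs.round(3))
```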
This study applies an efficient formulation to solve the stochastic security-constrained generation capacity expansion planning (GCEP) problem, using an improved method that directly computes the generalized generation distribution factors (GGDF) and the line outage distribution factors (LODF), so that the pre- and post-contingency constraints are modeled using only the partial transmission distribution factors (PTDF). The classical DC-based formulation is reformulated to include the security criteria, solving both pre- and post-contingency constraints simultaneously. The methodology also accounts for load uncertainty in the optimization problem using a two-stage multi-period model, and a clustering technique is used to reduce the number of load scenarios in the stochastic problem. The main advantage of this methodology is that the LODF can be computed quickly, especially for multiple-line outages (N-m). This idea could speed up contingency analyses and significantly improve the security-constrained analyses applied to GCEP problems. It is worth mentioning that this approach does not sacrifice optimality.
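A sketch of the standard construction of LODF from PTDF (illustrative; the paper's improved computation is not reproduced): with phi[l, k] denoting the flow change on line l per unit transfer across line k, the single-outage factor is phi[l, k] / (1 - phi[k, k]), and a matrix version handles the simultaneous outage of several lines. The PTDF values and pre-contingency flows below are hypothetical.

```python
import numpy as np

def lodf_from_ptdf(phi, outaged):
    """Generalized LODF for a simultaneous outage of the lines in `outaged`,
    built from the line-to-line PTDF matrix phi.  Post-contingency flows on
    the surviving (monitored) lines: f_post = f_pre + L @ f_pre[outaged]."""
    s = np.asarray(outaged)
    block = np.eye(len(s)) - phi[np.ix_(s, s)]
    return phi[:, s] @ np.linalg.inv(block)

phi = np.array([[0.6, 0.4, 0.2],       # hypothetical line-to-line PTDF matrix
                [0.3, 0.7, 0.5],
                [0.1, 0.2, 0.5]])
f_pre = np.array([50.0, 80.0, 30.0])   # pre-contingency flows (MW)

L = lodf_from_ptdf(phi, [1, 2])        # lines 1 and 2 out simultaneously (N-2)
f_post = f_pre + L @ f_pre[[1, 2]]
print(f_post[0])                       # post-contingency flow on the surviving line 0
```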
Support vector machines (SVMs) have been widely used for classification in machine learning and data mining. However, SVMs face a huge challenge in large-scale classification tasks. Recent progress has enabled the additive-kernel version of SVM to solve such large-scale problems nearly as fast as a linear classifier. This paper proposes a new accelerated mini-batch stochastic gradient descent algorithm for SVM classification with an additive kernel (AK-ASGD). On the one hand, the gradient is approximated by the sum of a scalar polynomial function for each feature dimension; on the other hand, Nesterov's acceleration strategy is used. The experimental results on benchmark large-scale classification data sets show that our proposed algorithm achieves higher testing accuracies and a faster convergence rate.
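The optimization scheme can be sketched for the plain linear case (the additive-kernel variant replaces the inner product with per-feature scalar functions, which is not reproduced here): mini-batch stochastic gradients of the hinge loss combined with Nesterov momentum. All data and hyperparameters are illustrative assumptions.

```python
import numpy as np

def nesterov_minibatch_svm(X, y, lam=1e-3, lr=0.1, momentum=0.9,
                           batch=64, epochs=20, seed=0):
    """Mini-batch SGD with Nesterov momentum for a linear soft-margin SVM
    (hinge loss + L2 regularization)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, v = np.zeros(d), np.zeros(d)
    for _ in range(epochs):
        for idx in np.array_split(rng.permutation(n), n // batch):
            look = w + momentum * v                     # Nesterov look-ahead point
            margins = y[idx] * (X[idx] @ look)
            active = margins < 1.0                      # margin-violating samples
            grad = lam * look - (y[idx][active, None] * X[idx][active]).sum(0) / len(idx)
            v = momentum * v - lr * grad
            w = w + v
    return w

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 5))
y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(1000))
w = nesterov_minibatch_svm(X, y)
print((np.sign(X @ w) == y).mean())                    # training accuracy on toy data
```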
This paper establishes a probability model for a multi-path quantum key distribution (QKD) scheme with public nodes shared among the set of paths used to transmit the key between the source node and the destination node. Then, to make the scheme applicable to general network topologies and to combine it with key routing in the QKD network, the proposed multi-path key distribution algorithm includes two major aspects: an approach that determines the number and length of the selected paths, and a labeled stochastic-path strategy that decreases the number of public nodes and avoids the loops and the moves away from the destination node that the previous scheme may produce. Finally, the paper demonstrates the soundness of the probability model and of the strategies used in the algorithm.
Deception has been widely considered in the literature as an effective means of enhancing security protection when the defender holds private information about the ongoing rivalry that is unknown to the attacker. However, most existing works on deception assume static environments and thus consider only myopic deception, while practical security games between the defender and the attacker may take place in dynamic scenarios. To better exploit the defender's private information in dynamic environments and improve security performance, a stochastic deception game (SDG) framework is developed in this work to enable the defender to conduct foresighted deception. To solve the proposed SDG, a new iterative algorithm that is provably convergent is developed. A corresponding learning algorithm is also developed to facilitate the defender in conducting foresighted deception in unknown dynamic environments. Numerical results show that the proposed foresighted deception offers a substantial performance improvement over conventional myopic deception.
Organizations face the issue of how to best allocate their security resources. Thus, they need an accurate method for assessing how many new vulnerabilities will be reported for the operating systems (OSs) they use in a given time period. Our approach consists of clustering vulnerabilities by leveraging the text information within vulnerability records, and then simulating the mean value function of vulnerabilities by relaxing the monotonic intensity function assumption that is prevalent among studies using software reliability models (SRMs) and the nonhomogeneous Poisson process (NHPP). We applied our approach to the vulnerabilities of four OSs: Windows, Mac, iOS, and Linux. In terms of curve fitting and prediction capability, our results are more accurate in all cases analyzed than those of a power-law model without clustering drawn from a family of SRMs.
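For context, the baseline the paper compares against is a power-law (NHPP) software reliability model; a minimal sketch of fitting such a mean value function m(t) = a*t^b to cumulative vulnerability counts and forecasting ahead is shown below, with synthetic counts standing in for real vulnerability data.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law_mvf(t, a, b):
    """Mean value function of a power-law NHPP: expected cumulative
    number of reported vulnerabilities by time t."""
    return a * np.power(t, b)

# months since release and cumulative vulnerability counts (synthetic stand-ins)
t = np.arange(1, 37, dtype=float)
counts = np.cumsum(np.random.default_rng(2).poisson(3.0 + 0.2 * t))

(a, b), _ = curve_fit(power_law_mvf, t, counts, p0=(1.0, 1.0))
forecast = power_law_mvf(48.0, a, b) - power_law_mvf(36.0, a, b)
print(f"a={a:.2f}, b={b:.2f}, predicted new vulnerabilities over the next 12 months: {forecast:.1f}")
```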
Establishing secret and reliable wireless communication is a challenging task of paramount importance. In this paper, we investigate the physical-layer security of the legitimate transmission link of a user who assists an Intrusion Detection System (IDS) in detecting eavesdropping and jamming attacks, in the presence of an adversary capable of conducting either attack. The user faces the challenge of whether to transmit, thus becoming vulnerable to an eavesdropping or a jamming attack, or to keep silent, in which case his/her transmission is delayed. The adversary likewise faces the challenge of choosing between an eavesdropping and a jamming attack without being detected. We model the interactions between the user and the adversary as a two-state stochastic game. Explicit solutions characterize some properties of the game and highlight interesting strategies embraced by the user and the adversary. Results show that our proposed system outperforms current systems in terms of communication secrecy.
Deregulated electricity markets rely on a two-settlement system consisting of day-ahead and real-time markets, across which electricity price is volatile. In such markets, locational marginal pricing is widely adopted to set electricity prices and manage transmission congestion. Locational marginal prices are vulnerable to measurement errors. Existing studies show that if the adversaries are omniscient, they can design profitable attack strategies without being detected by the residue-based bad data detectors. This paper focuses on a more realistic setting, in which the attackers have only partial and imperfect information due to their limited resources and restricted physical access to the grid. Specifically, the attackers are assumed to have uncertainties about the state of the grid, and the uncertainties are modeled stochastically. Based on this model, this paper offers a framework for characterizing the optimal stochastic guarantees for the effectiveness of the attacks and the associated pricing impacts.
In this paper, we propose a novel adaptive control architecture for addressing security and safety in cyber-physical systems subject to exogenous disturbances. Specifically, we develop an adaptive controller for time-invariant, state-dependent adversarial sensor and actuator attacks in the face of stochastic exogenous disturbances. We show that the proposed controller guarantees uniform ultimate boundedness of the closed-loop dynamical system in a mean-square sense. We further discuss the practicality of the proposed approach and provide a numerical example involving the lateral directional dynamics of an aircraft to illustrate the efficacy of the proposed adaptive control architecture.
This paper presents a solution to a multiple-model based stochastic active fault diagnosis problem over the infinite-time horizon. A general additive detection cost criterion is considered to reflect the objectives. Since the system state is unknown, the design consists of a reformulation in terms of perfect state information and a solution of the optimization problem by approximate dynamic programming. An adaptive particle filter state estimation algorithm based on the effective sample size is proposed to maintain the estimate quality while reducing computational costs. A reduction of the information statistics of the state is carried out using non-resampled particles to make the solution feasible. Simulation results illustrate the effectiveness of the proposed design.
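The effective-sample-size criterion the adaptive particle filter relies on can be sketched as follows (illustrative only; the fault diagnosis design itself is not reproduced): resample the particles only when ESS = 1 / sum(w_i^2) drops below a chosen fraction of the particle count.

```python
import numpy as np

def effective_sample_size(w):
    """ESS = 1 / sum(w_i^2) for normalized particle weights."""
    return 1.0 / np.sum(w ** 2)

def maybe_resample(particles, w, threshold_frac=0.5, rng=None):
    """Multinomial resampling triggered only when the ESS falls below a
    fraction of the particle count, reducing unnecessary resampling cost."""
    rng = rng or np.random.default_rng()
    n = len(w)
    if effective_sample_size(w) < threshold_frac * n:
        idx = rng.choice(n, size=n, p=w)
        return particles[idx], np.full(n, 1.0 / n)
    return particles, w

rng = np.random.default_rng(3)
particles = rng.standard_normal(500)
w = rng.random(500); w /= w.sum()
particles, w = maybe_resample(particles, w, rng=rng)
print(effective_sample_size(w))
```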
With Software-Defined Networking (SDN), the control-plane logic of forwarding devices (switches and routers) is extracted and moved to an entity called the SDN controller, which acts as a broker between the network applications and the physical network infrastructure. Failures of the SDN controller inhibit the network's ability to respond to new application requests and to react to events coming from the physical network. Despite the huge impact that the controller has on the network performance as a whole, a comprehensive study of its failure dynamics is still missing from the literature. The goal of this paper is to analyse, model and evaluate the impact that different controller failure modes have on its availability. A model in the formalism of Stochastic Activity Networks (SAN) is proposed and applied to a case study of a hypothetical controller based on commercial controller implementations. In the case study, we show how the proposed model can be used to estimate the controller's steady-state availability, quantify the impact of different failure modes on controller outages, and assess the effects of software ageing and of software reliability growth on the transient behaviour.
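To make the availability notion concrete, here is a minimal continuous-time Markov sketch (far simpler than the paper's SAN model) of a controller with two hypothetical failure modes and their repairs; the rates are invented for illustration, and the steady-state availability is the probability of the up state.

```python
import numpy as np

# state 0 = up, state 1 = software failure, state 2 = hardware failure
lam_sw, lam_hw = 1 / 400.0, 1 / 4000.0    # failure rates (1/hours, hypothetical)
mu_sw, mu_hw = 1 / 0.2, 1 / 8.0           # repair rates  (1/hours, hypothetical)

Q = np.array([[-(lam_sw + lam_hw), lam_sw, lam_hw],
              [mu_sw, -mu_sw, 0.0],
              [mu_hw, 0.0, -mu_hw]])      # CTMC generator matrix

# steady-state distribution pi solves pi @ Q = 0 with pi summing to 1
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(f"steady-state availability = {pi[0]:.6f}")
```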
Game theory serves as a powerful tool for distributed optimization in multiagent systems in different applications. In this paper we consider multiagent systems that can be modeled as a potential game whose potential function coincides with a global objective function to be maximized. This approach renders the agents the strategic decision makers and turns the corresponding optimization problem into the problem of learning an optimal equilibrium point in the designed game. In contrast to existing works on payoff-based learning, we deal here with systems in which agents have neither memory nor the ability to communicate, and base their decisions only on the currently played action and the experienced payoff. Because of these restrictions, we use the methods of reinforcement learning, stochastic approximation, and learning automata extensively reviewed and analyzed in [3], [9]. These methods allow us to set up agent dynamics that move the game out of inefficient Nash equilibria and lead it close to an optimal one for both discrete and continuous action sets.
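As a rough illustration of memoryless payoff-based learning (a single-agent learning-automaton sketch, not the multi-agent dynamics analyzed in the paper): the agent keeps only a mixed strategy, plays an action, observes a normalized payoff, and reinforces the played action. The payoff values and learning rate are assumptions.

```python
import numpy as np

def linear_reward_inaction(payoff, n_actions=3, steps=5000, lr=0.02, seed=4):
    """Memoryless payoff-based learning: maintain a mixed strategy p, play an
    action, observe a payoff in [0, 1], and move p toward the played action
    proportionally to the payoff (linear reward-inaction automaton)."""
    rng = np.random.default_rng(seed)
    p = np.full(n_actions, 1.0 / n_actions)
    for _ in range(steps):
        a = rng.choice(n_actions, p=p)
        r = payoff(a)                              # normalized payoff in [0, 1]
        p = p + lr * r * (np.eye(n_actions)[a] - p)
    return p

rng = np.random.default_rng(4)
means = np.array([0.3, 0.7, 0.5])                  # hypothetical mean payoffs per action
print(linear_reward_inaction(lambda a: float(np.clip(means[a] + 0.1 * rng.normal(), 0.0, 1.0))))
```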
Compressed sensing (CS), or compressive sampling, deals with the reconstruction of signals from limited observations/measurements, far below the Nyquist-rate requirement. This is essential in many practical imaging systems, as sampling at the Nyquist rate may not always be possible due to limited storage, slow sampling rates, or extremely expensive measurements, e.g., in magnetic resonance imaging (MRI). Mathematically, CS can be cast as the problem of finding the root of an unknown distribution comprising both unknown and known observations. Robbins-Monro (RM) stochastic approximation, a non-parametric approach, is explored here as a solution to the CS reconstruction problem. A distance-based linear prediction using the observed measurements is performed to obtain the unobserved samples, followed by the addition of random noise acting as the residual (prediction error). A spatial-domain adaptive Wiener filter is then used to suppress the noise and reveal new features from the degraded observations. Extensive simulation results highlight the relative performance gain over the existing work.
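The core Robbins-Monro iteration the paper builds on can be sketched in a few lines (illustrative only; the prediction and Wiener-filtering stages are not reproduced): iteratively step toward the root of a function that can only be observed through noisy evaluations, with step sizes a_n = 1/n.

```python
import numpy as np

def robbins_monro(noisy_g, x0=0.0, steps=2000, seed=5):
    """Robbins-Monro iteration x_{n+1} = x_n - a_n * g_hat(x_n), with step
    sizes a_n = 1/n satisfying sum a_n = inf and sum a_n^2 < inf."""
    rng = np.random.default_rng(seed)
    x = x0
    for n in range(1, steps + 1):
        x = x - (1.0 / n) * noisy_g(x, rng)
    return x

# find the root of g(x) = x - 2 observed only through noisy evaluations
root = robbins_monro(lambda x, rng: (x - 2.0) + rng.normal(0.0, 0.5))
print(root)      # converges toward 2
```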
Feedback loss can severely degrade overall system performance; in addition, it can affect the control and computation of Cyber-Physical Systems (CPS). CPS hold enormous potential for a wide range of emerging applications involving stochastic and time-critical traffic patterns. Stochastic data is random in nature, which makes it a great challenge to maintain real-time control whenever data is lost. In this paper, we propose a data recovery scheme, called the Efficient Temporal and Spatial Data Recovery (ETSDR) scheme, for stochastic incomplete feedback in CPS. In this scheme, we identify the temporal model based on the traffic patterns and consider the spatial effect of the nearest neighbor. Numerical results reveal that the proposed ETSDR outperforms both the weighted prediction (WP) and the exponentially weighted moving average (EWMA) algorithms, regardless of the percentage of missing data, in terms of root mean square error, mean absolute error, and integral of absolute error.
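For reference, the EWMA baseline that ETSDR is compared against can be sketched as follows (ETSDR's identified temporal model and spatial nearest-neighbor term are not reproduced): missing feedback samples are filled with the exponentially weighted average of the values seen so far, and the recovery error is scored with RMSE. The signal and loss pattern are synthetic assumptions.

```python
import numpy as np

def ewma_impute(series, alpha=0.3):
    """Fill missing samples (NaN) with the current EWMA of past observations."""
    est, out = 0.0, []
    for v in series:
        if np.isnan(v):
            out.append(est)                        # reuse the last EWMA estimate
        else:
            est = alpha * v + (1 - alpha) * est
            out.append(v)
    return np.array(out)

rng = np.random.default_rng(6)
truth = np.sin(np.linspace(0, 6 * np.pi, 200)) + 0.1 * rng.standard_normal(200)
observed = truth.copy()
observed[rng.random(200) < 0.2] = np.nan           # 20% feedback loss
recovered = ewma_impute(observed)
print(f"RMSE = {np.sqrt(np.mean((recovered - truth) ** 2)):.3f}")
```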
Botnets are a well-recognized global cyber-security threat, as they enable attack communities to command large collections of compromised computers (bots) on demand. Peer-to-peer (P2P) distributed hash tables (DHTs) have become particularly attractive botnet command and control (C&C) solutions due to the high resiliency gained via the diffuse random-graph overlays they produce. The injection of Sybils, computers pretending to be valid bots, remains a key defensive strategy against DHT-structured P2P botnets. This research uses packet-level network simulations to explore the relative merits of random, informed, and partially informed Sybil placement strategies. It is shown that random placements perform nearly as effectively as the more informed strategies tested, which require higher levels of inter-defender coordination. Moreover, it is shown that aspects of DHT-structured P2P botnets behave as statistically nonergodic processes when viewed as stochastic processes. This suggests that although optimal Sybil placement strategies appear to exist, they would need careful tuning to each specific P2P botnet instance.
Estimation of the misalignment angles of a strapdown inertial navigation system (INS) using global positioning system (GPS) data is highly affected by measurement noise, especially noise with time-varying statistical properties. Hence, an adaptive filtering approach is recommended to improve the accuracy of in-motion alignment. In this paper, a simplified form of Celso's adaptive stochastic filtering is derived and applied to estimate both the INS error states and the measurement noise statistics. To detect and bound the influence of outliers in INS/GPS integration, outlier detection based on a jerk tracking model is also proposed. The accuracy and validity of the proposed algorithm are tested through ground-based navigation experiments.
The gradient-descent total least-squares (GD-TLS) algorithm is a stochastic-gradient adaptive filtering algorithm that compensates for errors in both the input and the output data. We study the local convergence of the GD-TLS algorithm and find bounds on its step size that ensure its stability. We also analyze the steady-state performance of the GD-TLS algorithm and calculate its steady-state mean-square deviation. Our steady-state analysis is inspired by the energy-conservation-based approach to the performance analysis of adaptive filters. The results predicted by the analysis show good agreement with the simulation experiments.
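A stochastic-gradient sketch of the total-least-squares idea (the exact GD-TLS recursion and its normalization may differ): per sample, descend the instantaneous cost e^2 / (1 + ||w||^2), which accounts for noise on both the regressors and the desired signal. The system, noise levels, and step size below are assumptions; the step size must respect the stability bound analyzed in the paper.

```python
import numpy as np

def gd_tls(X, d, mu=0.01):
    """Stochastic gradient descent on the instantaneous TLS cost
    e^2 / (1 + ||w||^2), where e = d_k - w.T @ x_k."""
    w = np.zeros(X.shape[1])
    for x, dk in zip(X, d):
        e = dk - w @ x
        gamma = 1.0 + w @ w
        w = w + (mu / gamma) * (e * x + (e ** 2 / gamma) * w)
    return w

# errors-in-variables system identification example
rng = np.random.default_rng(7)
w_true = np.array([1.0, -0.5, 0.25])
X_clean = rng.standard_normal((5000, 3))
d = X_clean @ w_true + 0.05 * rng.standard_normal(5000)
X = X_clean + 0.05 * rng.standard_normal(X_clean.shape)   # noisy regressors
print(gd_tls(X, d))                                       # approaches w_true
```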
To protect complex power-grid control networks, power operators need efficient security assessment techniques that take into account both the cyber side and the power side of cyber-physical critical infrastructures. In this paper, we present CPINDEX, a security-oriented stochastic risk management technique that calculates cyber-physical security indices to measure the security level of the underlying cyber-physical setting. CPINDEX installs appropriate cyber-side instrumentation probes on individual host systems to dynamically capture and profile low-level system activities such as interprocess communications among operating system assets. CPINDEX uses the generated logs, along with topological information about the power network configuration, to build stochastic Bayesian network models of the whole cyber-physical infrastructure and updates them dynamically based on the current state of the underlying power system. Finally, CPINDEX runs belief propagation algorithms on the created stochastic models, combined with a novel graph-theoretic power system indexing algorithm, to calculate the cyber-physical index, i.e., to measure the security level of the system's current cyber-physical state. The results of our experiments with actual attacks against a real-world power control network show that, within a few seconds, CPINDEX can efficiently compute numerical indices that correctly indicate the progressing malicious attack.
Distributed and parallel applications are critical information technology systems in multiple industries, including academia, military, government, financial, medical, and transportation. These applications present target rich environments for malicious attackers seeking to disrupt the confidentiality, integrity and availability of these systems. Applying the military concept of defense cyber maneuver to these systems can provide protection and defense mechanisms that allow survivability and operational continuity. Understanding the tradeoffs between information systems security and operational performance when applying maneuver principles is of interest to administrators, users, and researchers. To this end, we present a model of a defensive maneuver cyber platform using Stochastic Petri Nets. This model enables the understanding and evaluation of the costs and benefits of maneuverability in a distributed application environment, specifically focusing on moving target defense and deceptive defense strategies.
The evolution of electrical grids, both in terms of enhanced ICT functionalities to improve efficiency, reliability and economics, and in terms of the increasing penetration of renewable distributed energy resources, results in a more sophisticated electrical infrastructure that poses new challenges from several perspectives, including resilience and quality-of-service analysis. In addition, the presence of interdependencies, which more and more characterize critical infrastructures (including the power sector), exacerbates the need for advanced analysis approaches, to be employed possibly since the early phases of the system design, to identify vulnerabilities and appropriate countermeasures. In this paper, we outline an approach to model and analyze smart grids and discuss the major challenges to be addressed in stochastic model-based analysis to account for the peculiarities of the involved system elements. The representation of the dynamic and flexible behavior of generators and loads, as well as of the complex ICT control functions required to preserve and/or re-establish electrical equilibrium in the presence of changes, needs to be addressed to assess suitable indicators of the resilience and quality of service of the smart grid.
This paper deals with the robust H∞ cyber-attack estimation problem for control systems under stochastic cyber-attacks and disturbances. The focus is on designing an H∞ filter that maximizes the attack sensitivity and minimizes the effect of disturbances. The design requires not only disturbance attenuation but also that the residual remain as sensitive to attacks as possible while the effect of disturbances is minimized. A stochastic model of a control system subject to stochastic cyber-attacks satisfying a Markovian stochastic process is constructed, and the stochastic attack models that a control system may be exposed to are also presented. Furthermore, applying an H∞ filtering technique based on linear matrix inequalities (LMIs), the paper obtains sufficient conditions ensuring that the filtering error dynamics are asymptotically stable and satisfy a prescribed ratio between cyber-attack sensitivity and disturbance sensitivity. Finally, the results are applied to the control of a Quadruple-tank process (QTP) under a stochastic cyber-attack and a stochastic disturbance. The simulation results show that the designed filter is effective and feasible in practical applications.