Bibliography
In this article, we study the secrecy performance of the primary user's transmission in overlay cognitive wireless networks, in which an untrusted, energy-limited secondary cooperative user assists the primary transmission in exchange for spectrum resources. In this network, information can be transmitted simultaneously over the direct and relay links. To enhance the security of the primary transmission, the receiver uses maximum ratio combining (MRC) to exploit the two copies of the source information. For the security analysis, we first derive a tight lower-bound expression for the secrecy outage probability (SOP). Three asymptotic SOP expressions are then derived to further analyze the impact of the transmit power and the location of the secondary cooperative node on the security of the primary user's information. The findings show that the secrecy performance of the primary user improves as the transmit power increases. Moreover, the smaller the distance between the secondary node and the destination, the better the primary secrecy performance.
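For reference, the metric analyzed above is conventionally defined as follows (a generic sketch with assumed notation; the paper's specific lower-bound expression is not reproduced):

```latex
% Generic SOP definition (reference sketch, not the paper's derived bound)
C_s = \left[\log_2(1+\gamma_D) - \log_2(1+\gamma_E)\right]^+ ,
\qquad
\mathrm{SOP} = \Pr\{ C_s < R_s \},
```

where γ_D denotes the post-MRC SNR at the primary destination, γ_E the SNR at the eavesdropping (here, untrusted secondary) node, and R_s the target secrecy rate.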
The primary user emulation (PUE) attack causes security issues in a cognitive radio network (CRN) while the unused spectrum is being sensed. In a PUE attack, malicious users transmit an emulated primary signal during the spectrum sensing interval to secondary users (SUs) to prevent them from accessing the primary user (PU) spectrum bands. In the present paper, a defense against such attacks based on the Neyman-Pearson criterion is presented and evaluated in terms of the total error probability. The impact on the SU of several parameters, such as attacker strength, the attacker's presence probability, and the signal-to-noise ratio, is shown. Results show that the proposed method protects against the harmful effects of the PUE attack in spectrum sensing.
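A minimal numerical sketch of the kind of total-error-probability evaluation described above, under an assumed Gaussian energy-detection model (the threshold, SNRs, and sample count are illustrative, not the paper's):

```python
# Hedged sketch: total error probability for a Neyman-Pearson-style energy test that
# distinguishes a true PU signal from an emulated (attacker) signal. The Gaussian
# statistics and all parameter values below are assumptions for illustration only.
from scipy.stats import norm

def total_error_probability(threshold, snr_pu, snr_attacker, n_samples):
    """P_fa: attacker signal mistaken for the PU; P_md: true PU signal missed."""
    # Normalized energy statistic: mean 1+SNR, std (1+SNR)*sqrt(2/N) (Gaussian approx.)
    mu_a, sd_a = 1.0 + snr_attacker, (1.0 + snr_attacker) * (2.0 / n_samples) ** 0.5
    mu_p, sd_p = 1.0 + snr_pu, (1.0 + snr_pu) * (2.0 / n_samples) ** 0.5
    p_fa = norm.sf((threshold - mu_a) / sd_a)   # declare "PU present" under the attacker
    p_md = norm.cdf((threshold - mu_p) / sd_p)  # declare "PU absent" under the true PU
    return p_fa + p_md                          # presence probabilities could weight the terms

print(total_error_probability(threshold=1.3, snr_pu=0.5, snr_attacker=0.1, n_samples=200))
```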
Key derivation from the physical layer features of communication channels is a promising approach that can aid key management and enhance security in communication networks. In this paper, we consider a key generation technique that quantizes the received signal phase to obtain secret keys. We then study the effect of a jamming attack on this system. The jammer is an active attacker that tries to disturb the key derivation procedure and changes the phase of the received signal by transmitting an adversarial signal. We evaluate the effect of jamming on the security performance of the system and show ways to improve this performance. Our numerical results show that more phase quantization regions limit the probability of successful attacks.
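A small simulation sketch of phase-quantization key generation under a jamming disturbance; the disturbance model, the definition of a "successful" attack (forcing the key symbol into one attacker-chosen region), and all parameter values are assumptions, not taken from the paper:

```python
# Hedged sketch: key symbols obtained by quantizing the received-signal phase into M
# regions, and the empirical probability that a jammer's phase shift lands the symbol
# in its target region (assumed success criterion).
import numpy as np

def phase_region(phase, m_regions):
    """Index of the quantization region containing 'phase' (in radians)."""
    return int((phase % (2 * np.pi)) // (2 * np.pi / m_regions))

rng = np.random.default_rng(0)
true_phase = rng.uniform(0, 2 * np.pi, size=10_000)
jam_shift = rng.normal(0, 0.3, size=10_000)        # adversarial phase disturbance (assumed)
for m in (2, 4, 8, 16):
    forced = np.mean([phase_region(p + d, m) == 0  # region 0: attacker's target (assumed)
                      for p, d in zip(true_phase, jam_shift)])
    print(f"M={m}: probability the jammer forces its target key symbol = {forced:.3f}")
```

Under this assumed success criterion, the forcing probability shrinks roughly as 1/M, consistent with the abstract's observation that more quantization regions limit successful attacks.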
This paper studies the physical layer security performance of a simultaneous wireless information and power transfer (SWIPT) millimeter wave (mmWave) ultra-dense network under a stochastic geometry framework. Specifically, we first derive the energy-information coverage probability and the secrecy probability of the considered system under time switching policies. We then derive the effective secrecy throughput (EST), which characterizes the trade-off between energy coverage and secure, reliable transmission. Theoretical analyses and simulation results reveal design insights into the effects of various network parameters, such as transmit power, time switching factor, transmission rate, and confidential information rate, on the secrecy performance. In particular, the effective secrecy throughput cannot be improved simply by increasing the transmit power.
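For context, one commonly used form of the effective secrecy throughput is sketched below; the notation is assumed, and the paper's exact expression, which also accounts for energy coverage under the time switching policy, is not reproduced:

```latex
% A common EST form (reference sketch with assumed notation)
\mathrm{EST} = R_s \,
\Pr\{\text{energy-information coverage at transmission rate } R_t\}\,
\Pr\{\text{secrecy at confidential information rate } R_s\},
\qquad R_s \le R_t .
```

Because higher transmit power tends to strengthen both the legitimate and the eavesdropping links, the product above need not grow with power, which matches the abstract's conclusion.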
The risk of large-scale blackouts and cascading failures in power grids can stem from vulnerable transmission lines and the lack of proper remediation techniques after the first failure is recognized. In this paper, we assess the vulnerability of a system using fault chain theory and a power flow-based method, and calculate the probability of a large-scale blackout. Furthermore, we consider a Remedial Action Scheme (RAS) to reduce the vulnerability of the system and to harden the critical components against intentional attacks. To identify the most critical lines more efficiently, a new vulnerability index is presented. The effectiveness of the new index and the impact of the applied RAS are illustrated on the IEEE 14-bus test system.
We report an experimental study of device-independent quantum random number generation based on a detection-loophole-free Bell test with entangled photons. After accounting for statistical fluctuations and applying an 80 Gb × 45.6 Mb Toeplitz matrix hashing, we achieve a final random bit rate of 114 bits/s with a failure probability of less than 10⁻⁵.
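A small-scale sketch of Toeplitz-matrix hashing as a randomness extractor (the experiment above uses an 80 Gb × 45.6 Mb matrix; the dimensions and seed here are purely illustrative):

```python
# Hedged sketch: compress weakly random raw bits into a shorter, nearly uniform output
# by multiplying with a binary Toeplitz matrix defined by a random seed.
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_extract(raw_bits, out_len, seed_bits):
    """Compress raw_bits (length n) to out_len bits; the Toeplitz matrix is defined
    by n + out_len - 1 seed bits (first column and first row)."""
    n = len(raw_bits)
    assert len(seed_bits) == n + out_len - 1
    col = seed_bits[:out_len]            # first column of the Toeplitz matrix
    row = seed_bits[out_len - 1:]        # first row (shares the corner element)
    T = toeplitz(col, row)               # shape (out_len, n)
    return T.dot(raw_bits) % 2

rng = np.random.default_rng(1)
raw = rng.integers(0, 2, size=1024)                  # raw (imperfect) random bits
seed = rng.integers(0, 2, size=1024 + 256 - 1)       # public seed defining the matrix
print(toeplitz_extract(raw, 256, seed)[:16])         # first 16 extracted bits
```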
This paper provides a Common Vulnerability Scoring System (CVSS) metric-based technique for classifying and analysing prevailing Computer Network Security Vulnerabilities and Threats (CNSVT). The problem addressed in this paper is that, at the time of writing, no effective approaches existed for analysing and classifying CNSVT for assessment purposes based on CVSS metrics. The authors address this by generating a CVSS metric-based dynamic Vulnerability Analysis Classification Countermeasure (VACC) criterion that is able to rank vulnerabilities. The CVSS metric-based VACC allows the computation of a vulnerability Similarity Measure (VSM) using the Hamming and Euclidean distance metric functions. Moreover, the CVSS metric-based VACC also enables the random measurement of the VSM for a selected number of vulnerabilities based on the [Ma-Ma], [Ma-Mi], [Mi-Ci], [Ma-Ci] ranking scores. This technique is aimed at allowing security experts to conduct proper vulnerability detection and assessment across computer-based networks, based on perceived occurrence, by checking the probability that given threats will occur. The authors also propose high-level countermeasures for the vulnerabilities that have been listed. The authors have evaluated the CVSS metric-based VACC and the results are promising. Based on this technique, these propositions can help in the development of stronger computer and network security tools.
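An illustrative computation of a similarity measure between two vulnerabilities using Hamming and Euclidean distances over numerically encoded CVSS metrics; the encoding and metric values are hypothetical and do not reproduce the paper's VACC criterion:

```python
# Hedged sketch: similarity between two vulnerabilities from Hamming (count of differing
# metrics) and Euclidean (magnitude of difference) distances over CVSS-style vectors.
import math

# Hypothetical numeric encoding of CVSS base metrics for two vulnerabilities.
vuln_a = {"AV": 0.85, "AC": 0.77, "PR": 0.62, "UI": 0.85, "C": 0.56, "I": 0.56, "A": 0.0}
vuln_b = {"AV": 0.62, "AC": 0.44, "PR": 0.62, "UI": 0.85, "C": 0.56, "I": 0.22, "A": 0.0}

keys = sorted(vuln_a)
hamming = sum(vuln_a[k] != vuln_b[k] for k in keys)
euclid = math.sqrt(sum((vuln_a[k] - vuln_b[k]) ** 2 for k in keys))
print(f"Hamming distance: {hamming}, Euclidean distance: {euclid:.3f}")
```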
With the growth of smartphone sales and app usage, fingerprinting and identification of smartphone apps have become a considerable threat to user security and privacy. Traffic analysis is one of the most common methods for identifying apps. Traditional countermeasures to traffic analysis include traffic morphing and multipath routing. The basic idea of multipath routing is to make it harder for an adversary to eavesdrop on all traffic by splitting the traffic into several subflows and transmitting them over different routes. Previous work on multipath routing focuses mainly on Wireless Sensor Networks (WSNs) or Mobile Ad Hoc Networks (MANETs). In this paper, we propose a multipath routing scheme for smartphones with edge network assistance to mitigate traffic analysis attacks. We consider an adversary with limited capability, who can intercept the traffic of only one node with a certain attack probability, and we aim to minimize the traffic the adversary can intercept. We formulate our design as a flow routing optimization problem, and a heuristic algorithm is proposed to solve it. Finally, we present simulation results for our scheme and show that it can effectively protect smartphones from traffic analysis attacks.
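A toy sketch of the quantity such a routing scheme tries to minimize, namely the traffic the adversary is expected to intercept for a given split; the adversary model and numbers are assumed, and the paper's actual optimization formulation and heuristic are not reproduced:

```python
# Hedged sketch: expected intercepted traffic when a flow is split across relay nodes,
# each of which the adversary eavesdrops on with some probability (assumed model).
def expected_intercepted(traffic_split, attack_prob):
    """traffic_split[i]: fraction of the flow routed via node i (sums to 1);
    attack_prob[i]: probability the adversary intercepts traffic at node i."""
    return sum(f * p for f, p in zip(traffic_split, attack_prob))

attack_prob = [0.6, 0.3, 0.1]                                # assumed adversary model
print(expected_intercepted([1.0, 0.0, 0.0], attack_prob))    # single-path routing
print(expected_intercepted([0.1, 0.3, 0.6], attack_prob))    # split weighted toward safer nodes
```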
The SYN flood attack is a serious cause of disruption to normal traffic in a MANET. A SYN flood attack takes advantage of congestion by populating a specific route with unwanted traffic, resulting in denial of service. In this paper, we propose an adaptive detection mechanism using an artificial intelligence technique, named SYN Flood Attack Detection Based on Bayes Estimator (SFADBE), for mobile ad hoc networks (MANETs). In SFADBE, every node gathers current information about the available channels, and a secure, congestion-free (Best Path) channel is selected for the traffic. Constant congestion of the available data path can be caused by a SYN flood attack. Using this AI technique, we achieve a higher SYN flood detection probability than existing approaches. Simulation results show that our proposed SFADBE algorithm is low cost and robust compared to other existing approaches.
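A minimal Bayes-posterior sketch of the kind of probabilistic attack decision a Bayes-estimator detector makes; the likelihood model, prior, and half-open-connection-rate evidence are assumptions for illustration and are not taken from the paper:

```python
# Hedged sketch: posterior probability of a SYN flood given an observed rate of
# half-open connections on the current channel (all likelihoods are assumed).
def posterior_attack(evidence_rate, prior_attack=0.1):
    # Hypothetical likelihoods of observing this half-open-connection rate
    p_e_given_attack = min(1.0, evidence_rate / 100.0)
    p_e_given_normal = max(1e-3, 1.0 - evidence_rate / 100.0)
    num = p_e_given_attack * prior_attack
    den = num + p_e_given_normal * (1.0 - prior_attack)
    return num / den

for rate in (5, 40, 90):                 # half-open connections per second (illustrative)
    print(rate, round(posterior_attack(rate), 3))
```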
A new approach based on the formalism of hybrid automata is proposed for analyzing conflict processes between an information system and an information security malefactor. An example of a probability-based assessment of the malefactor's victory is given, together with the possibility of abstracting from a specific type of probability density function for the residence time of the conflicting parties in their possible states. A model of the distribution of destructive informational influences in the information system is proposed to connect the process by which destructive information processes spread with the process by which the states of the information system's subjects change. An example of analyzing the spread of destructive information processes is given.
Pre-silicon hardware Trojan detection has been studied for years. The most popular benchmark circuits are from Trust-Hub, and their common feature is that the probability of activating the hardware Trojans is very low. This has led to a series of machine learning-based hardware Trojan detection methods that try to find nets with a low signal probability of being 0 or 1. On the other hand, if the probability of activating a hardware Trojan is high, the Trojan can easily be found through behavioural simulation or during functional testing. This paper explores the "grey zone" between these two opposite scenarios: if the activation probability of a hardware Trojan is not low enough for machine learning to detect it and not high enough for behavioural simulation or functional testing to find it, it can escape detection. Experiments show the existence of such hardware Trojans, and this paper suggests a new set of hardware Trojan benchmark circuits for future study.
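A toy sketch of the static signal-probability propagation that many ML-based detection flows use to flag rarely activated nets; the netlist and the 0.1 threshold are made up:

```python
# Hedged sketch: propagate signal probabilities through a toy gate-level netlist and
# flag nets that are rarely 1 (candidate Trojan trigger nets).
def sig_prob(gate, p_inputs):
    a, b = p_inputs
    if gate == "AND":  return a * b
    if gate == "OR":   return a + b - a * b
    if gate == "XOR":  return a + b - 2 * a * b
    if gate == "NAND": return 1 - a * b
    raise ValueError(gate)

# net -> (gate, input nets); primary inputs i0..i4 assumed to be 1 with probability 0.5
netlist = {"n1": ("AND", ("i0", "i1")), "n2": ("AND", ("n1", "i2")),
           "n3": ("AND", ("n2", "i3")), "trig": ("AND", ("n3", "i4"))}
p = {f"i{k}": 0.5 for k in range(5)}
for net, (gate, ins) in netlist.items():          # netlist listed in topological order
    p[net] = sig_prob(gate, (p[ins[0]], p[ins[1]]))
low = {n: v for n, v in p.items() if v < 0.1}
print(low)   # nets with a low probability of being 1 are candidate trigger nets
```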
To evaluate network security risks and implement effective defenses in industrial control systems, a risk assessment method for industrial control systems based on attack graphs is proposed. The concept of network security elements is used to translate network attacks into network state migration problems and to build an industrial control network attack graph model. To address the current subjectivity of evaluations based on expert experience, an atomic attack probability assignment method and the CVSS evaluation system are introduced to evaluate the security status of the industrial control system. Finally, a case analysis is performed, taking the centralized control system of a thermal power plant as the experimental background. The experimental results show that the method can comprehensively analyze potential safety hazards in an industrial control system and provide a basis for security management personnel to take effective defensive measures.
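An illustrative sketch of combining CVSS-derived atomic attack probabilities along an attack-graph path; the step names, probabilities, and independence assumption are hypothetical and do not reproduce the paper's assignment method:

```python
# Hedged sketch: multiply atomic attack probabilities (e.g., derived from CVSS
# exploitability scores) along one attack-graph path to rate that path's risk.
atomic = {
    "phish_engineer_workstation": 0.6,   # assumed CVSS-derived success probabilities
    "pivot_to_scada_server": 0.4,
    "issue_malicious_control_cmd": 0.7,
}
path = ["phish_engineer_workstation", "pivot_to_scada_server", "issue_malicious_control_cmd"]

p_path = 1.0
for step in path:                        # atomic attacks assumed independent
    p_path *= atomic[step]
print(f"probability of the full attack path succeeding: {p_path:.3f}")   # 0.168
```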
Gaussian random attacks that jointly minimize the amount of information the operator obtains from the grid and the probability of attack detection are presented. The construction of the attack is posed as an optimization problem with a utility function that captures two effects: first, minimizing the mutual information between the measurements and the state variables; second, minimizing the probability of attack detection via the Kullback-Leibler (KL) divergence between the distribution of the measurements with an attack and the distribution of the measurements without an attack. Additionally, a lower bound is obtained on the utility achieved by attacks constructed with imperfect knowledge of the second-order statistics of the state variables. The performance of the attack construction using the sample covariance matrix of the state variables is evaluated numerically. These results are tested on the IEEE 30-bus test system.
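A sketch of the detection-related term only: the KL divergence between the Gaussian measurement distributions with and without an additive attack, for a toy linearized grid model; the matrices are made up and the full joint attack construction is not reproduced:

```python
# Hedged sketch: D(P_{Y+A} || P_Y) for Y = H X + Z with zero-mean Gaussian X, Z, A.
# A smaller divergence means the attack is harder to detect.
import numpy as np

def kl_gaussian(sigma_attacked, sigma_clean):
    """KL divergence between two zero-mean Gaussians with the given covariances."""
    m = sigma_clean.shape[0]
    inv = np.linalg.inv(sigma_clean)
    _, logdet_c = np.linalg.slogdet(sigma_clean)
    _, logdet_a = np.linalg.slogdet(sigma_attacked)
    return 0.5 * (np.trace(inv @ sigma_attacked) - m + logdet_c - logdet_a)

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 4))                        # toy measurement matrix
sigma_x = np.eye(4); sigma_z = 0.1 * np.eye(6)     # state and noise covariances (assumed)
sigma_y = H @ sigma_x @ H.T + sigma_z              # clean measurement covariance
sigma_a = 0.05 * np.eye(6)                         # candidate attack covariance (assumed)
print(kl_gaussian(sigma_y + sigma_a, sigma_y))
```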
Quantifying vulnerability and security levels for the diverse network links of a smart grid has long been a challenging task. Security experts and network administrators have tended to act based on their proficiency and practice to mitigate network attacks, rather than on objective metrics and models. This paper uses the Markov chain model [1] to quantitatively evaluate the vulnerabilities associated with the 802.11 Wi-Fi network in a smart grid. Administrators can now assess the severity of potential attacks by determining the probability distribution of the successive states and thus provide the corresponding security measures. The model is based on the observed vulnerabilities provided by the Common Vulnerabilities and Exposures (CVE) database maintained by MITRE [2] to calculate the transition probabilities of the Markov processes (states) and thus deduce the vulnerability level of entire attack paths in an attack graph. Cumulative probabilities indicating a high vulnerability level along a specific attack path lead the system administrator to apply appropriate security measures before potential attacks occur.
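A toy Markov-chain sketch of how the probability of reaching a compromised state accumulates along an attack path; the states and transition probabilities are illustrative and are not derived from CVE data:

```python
# Hedged sketch: state distribution after k attack steps for an assumed 802.11 attack path.
import numpy as np

states = ["recon", "key_crack", "assoc_hijack", "full_compromise"]
P = np.array([[0.5, 0.4, 0.1, 0.0],      # illustrative transition probabilities
              [0.0, 0.6, 0.3, 0.1],
              [0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 0.0, 1.0]])     # full compromise is absorbing
pi = np.array([1.0, 0.0, 0.0, 0.0])      # attacker starts at reconnaissance
for k in range(1, 6):
    pi = pi @ P
    print(f"after {k} steps, P({states[-1]}) = {pi[-1]:.3f}")
```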
Parameter estimation in wireless sensor networks (WSNs) using encrypted non-binary quantized data is studied. In a WSN, sensors transmit their observations to a fusion center through a wireless medium, where the observations are susceptible to unauthorized eavesdropping. Encryption approaches for WSNs with fixed-threshold binary quantization were explored previously. However, fixed-threshold binary quantization limits parameter estimation to scalar parameters. In this paper, we propose a stochastic encryption approach for WSNs that can operate on non-binary quantized observations and is capable of vector parameter estimation. We extend a previously proposed binary stochastic encryption approach to the non-binary generalized case. Sensor outputs are quantized using a quantizer with R + 1 levels, where R ∈ {1, 2, 3, ...}, encrypted by flipping them with certain flipping probabilities, and then transmitted. Optimal maximum-likelihood estimators are derived from the perspectives of both a legitimate fusion center (LFC) and a third-party fusion center (TPFC). We assume the TPFC is unaware of the encryption. Asymptotic analysis of the estimators is performed by deriving the Cramér-Rao lower bound for LFC estimation, and the asymptotic bias and variance for TPFC estimation. Numerical results validating the asymptotic analysis are presented.
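A simplified sketch of the quantize-then-flip encryption step with R + 1 levels; the thresholds and flipping-probability matrix below are illustrative, not the optimized design analyzed in the paper:

```python
# Hedged sketch: quantize noisy observations to R+1 levels, then stochastically flip
# each level according to a flipping-probability matrix before transmission.
import numpy as np

def quantize(x, thresholds):
    """Map each observation to one of R+1 levels using R thresholds."""
    return np.digitize(x, thresholds)

def stochastic_encrypt(levels, flip_matrix, rng):
    """flip_matrix[i, j]: probability of reporting level j when the true level is i."""
    return np.array([rng.choice(len(flip_matrix), p=flip_matrix[l]) for l in levels])

rng = np.random.default_rng(0)
obs = 1.0 + rng.normal(0, 1, size=8)                    # noisy observations of theta = 1
levels = quantize(obs, thresholds=[-0.5, 0.5, 1.5])     # R = 3 thresholds -> 4 levels
flip = np.array([[0.7, 0.1, 0.1, 0.1],                  # illustrative flipping probabilities
                 [0.1, 0.7, 0.1, 0.1],
                 [0.1, 0.1, 0.7, 0.1],
                 [0.1, 0.1, 0.1, 0.7]])
print(levels, stochastic_encrypt(levels, flip, rng))
```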
It is a challenging problem to preserve the friendship correlations between individuals when publishing social-network data. To alleviate this problem, the uncertain graph has recently been introduced. The main idea of the uncertain graph is to convert an original graph into an uncertain form in which the correlations between individuals are represented as associated probabilities. However, existing uncertain-graph methods lack rigorous privacy guarantees and rely on assumptions about the adversary's knowledge. In this paper, we first introduce a general model for constructing uncertain graphs. We then propose an algorithm under this model based on differential privacy and analyze the algorithm's privacy. Our algorithm provides rigorous privacy guarantees and resists the background knowledge attack. Finally, experiments show that the proposed algorithm satisfies differential privacy and is feasible. We also compare our algorithm with the (k, ε)-obfuscation algorithm in terms of data utility; under our algorithm, the importance of nodes for the network is similar to that under the (k, ε)-obfuscation algorithm.
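A minimal sketch of producing an uncertain graph with an edge-level differentially private mechanism (randomized response); this is a generic stand-in for, not a reproduction of, the algorithm proposed above, and the edges and ε value are made up:

```python
# Hedged sketch: report each potential edge via randomized response (edge-level epsilon-DP),
# then publish an uncertain graph whose edge probability is the chance the report is correct.
import math, random

def randomized_response_edge(exists, epsilon, rng=random):
    """Report the true edge bit with probability e^eps / (1 + e^eps)."""
    keep = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return exists if rng.random() < keep else not exists

def uncertain_probability(reported, epsilon):
    """Edge probability in the published uncertain graph: the probability that the
    reported bit equals the true bit."""
    keep = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return keep if reported else 1.0 - keep

random.seed(0)
true_edges = {("alice", "bob"): True, ("alice", "carol"): False}
for pair, exists in true_edges.items():
    rep = randomized_response_edge(exists, epsilon=1.0)
    print(pair, "->", round(uncertain_probability(rep, 1.0), 3))
```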
Wireless sensor networks have attracted substantial research interest recently because of their unique features, such as fault tolerance and autonomous operation. Maximizing coverage while accounting for resource scarcity is a crucial problem in wireless sensor networks, and approaches that address this problem while maximizing the network lifetime are considered prominent. Node scheduling is one such mechanism. A scheduling strategy that addresses the target coverage problem based on coverage probability and trust values was proposed in the Energy Efficient Coverage Protocol (EECP). In this paper, optimized decision rules for determining the number of active nodes are obtained using rough set theory. The results show that the proposed extension yields fewer decision rules to consider when determining node states in the network, and hence improves network efficiency by reducing the number of packets transmitted and the overhead.
This paper investigates closed-form expressions for evaluating the performance of the compressive sensing (CS) based energy detector (ED). The conventional way to approximate the probability density function of the ED test statistic invokes the central limit theorem and treats the decision variable as Gaussian. This approach, however, provides a good approximation only if the number of samples is large enough, which is usually not the case in the CS framework, where the goal is to keep the sample size low. Moreover, working with a reduced number of measurements is of practical interest for general spectrum sensing in cognitive radio applications, where the sensing time should be sufficiently short, since any time spent sensing cannot be used for data transmission on the detected idle channels. In this paper, we make use of low-complexity approximations based on algebraic transformations of the one-dimensional Gaussian Q-function. More precisely, this paper provides new closed-form expressions for accurate evaluation of the CS-based ED performance as a function of the compression ratio and the signal-to-noise ratio (SNR). Simulation results demonstrate the increased accuracy of the proposed equations compared to existing works.
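A sketch comparing the exact Gaussian Q-function with one well-known two-term exponential approximation (Chiani et al.); this illustrates the general idea of algebraic Q-function approximations but is not the specific closed form derived in the paper:

```python
# Hedged sketch: exact Q(x) via the complementary error function versus the Chiani
# two-term exponential approximation, valid for x >= 0.
import math

def q_exact(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def q_chiani(x):
    # Q(x) ~ (1/12) exp(-x^2/2) + (1/4) exp(-2 x^2 / 3)
    return math.exp(-x * x / 2) / 12 + math.exp(-2 * x * x / 3) / 4

for x in (0.5, 1.0, 2.0, 3.0):
    print(f"x={x}: exact={q_exact(x):.5f}  approx={q_chiani(x):.5f}")
```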
As malicious domains are a vital component of a variety of cyber attacks, malicious domain detection has become a hot topic in cyber security. Several recent techniques identify malicious domains through analysis of DNS data, because DNS data contains a great deal of global information that cannot be manipulated by attackers. Attackers constantly recycle resources, frequently changing domain-IP resolutions and creating new domains to avoid detection. As a result, multiple malicious domains are hosted by the same IPs, and multiple IPs simultaneously host the same malicious domains, creating intrinsic associations among them. Hence, labeled domains, which can be traced back from the query history of all domains, can be used to verify and uncover these associations. Graphs are a natural candidate for representing this relationship, and many high-performance graph algorithms exist. The problem can thus be transformed into the graph-mining task of inferring node reputation scores using improvements to the belief propagation algorithm; the higher the reputation score a node reveals, the higher its inferred probability of being malicious. For demonstration, this paper proposes a malicious domain detection technique and evaluates it on a real-world dataset. The dataset is collected from DNS servers and used to build a DNS graph. The proposed technique achieves high performance, with an accuracy above 98.3% and precision and recall of 99.1% and 98.6%, respectively. In particular, with a small set of labeled domains (legitimate and malicious), the technique can discover a large set of potentially malicious domains. The results indicate that the method is highly effective in detecting malicious domains.
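A heavily simplified score-propagation sketch on a domain-IP graph, standing in for the improved belief propagation algorithm described above; the graph, seed labels, and iteration count are made up:

```python
# Hedged sketch: iteratively diffuse maliciousness scores (1 = malicious, 0 = benign)
# from labeled domains to unlabeled nodes over a bipartite domain-IP graph.
edges = [("bad1.com", "1.2.3.4"), ("bad2.com", "1.2.3.4"), ("unknown.com", "1.2.3.4"),
         ("good.com", "5.6.7.8"), ("unknown.com", "5.6.7.8")]
seeds = {"bad1.com": 1.0, "bad2.com": 1.0, "good.com": 0.0}   # labeled domains

neighbors = {}
for a, b in edges:
    neighbors.setdefault(a, []).append(b)
    neighbors.setdefault(b, []).append(a)

score = {n: seeds.get(n, 0.5) for n in neighbors}   # unlabeled nodes start at 0.5
for _ in range(10):                                 # simple iterative message passing
    new = {}
    for n in neighbors:
        if n in seeds:
            new[n] = seeds[n]                       # keep ground-truth labels fixed
        else:
            new[n] = sum(score[m] for m in neighbors[n]) / len(neighbors[n])
    score = new
print({n: round(s, 2) for n, s in score.items() if n not in seeds})
```

Unlabeled nodes inherit scores from their neighbors, so domains that share hosting infrastructure with known malicious domains end up with high scores, which mirrors the intuition behind the graph-based inference above.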