Bibliography
With the rapid progression of Information and Communication Technology (ICT), and especially of the Internet of Things (IoT), the conventional electrical grid is being transformed into a new intelligent paradigm known as the Smart Grid (SG). SG provides significant benefits to both utility companies and energy consumers, such as two-way flow of electricity and information, distributed generation, remote monitoring, self-healing and pervasive control. At the same time, however, this dependence introduces new security challenges, since SG inherits the vulnerabilities of multiple heterogeneous, co-existing legacy and smart technologies, such as IoT and Industrial Control Systems (ICS). An effective countermeasure against the various cyberthreats in SG is the Intrusion Detection System (IDS), which informs the operator in a timely manner about possible cyberattacks and anomalies. In this paper, we provide an anomaly-based IDS especially designed for SG, utilising operational data from a real power plant. In particular, many machine learning and deep learning models were deployed, introducing novel parameters and feature representations in a comparative study. The evaluation analysis demonstrated the efficacy of the proposed IDS and the improvement due to the suggested complex data representation.
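A minimal sketch of the kind of comparative study described above, assuming generic labeled operational data and off-the-shelf scikit-learn models (the synthetic data, features, and model choices here are illustrative, not the paper's):

```python
# Hypothetical comparative baseline: several classifiers on labeled operational data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))            # stand-in for operational measurements
y = (X[:, 0] + X[:, 1] > 1.5).astype(int)  # stand-in anomaly label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

models = {
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "svm": SVC(kernel="rbf"),
    "mlp": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "F1 =", round(f1_score(y_te, model.predict(X_te)), 3))
```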
In this paper, the decentralized dynamic power allocation problem is investigated for mobile ad hoc networks (MANETs) at the tactical edge. Due to the mobility and self-organizing features of MANETs and the environmental uncertainties of the battlefield, many existing optimal power allocation algorithms are neither efficient nor practical. Furthermore, the continuously growing scale of the wireless connection population in the emerging Internet of Battlefield Things (IoBT) introduces additional challenges for optimal power allocation due to the “Curse of Dimensionality”. In order to address these challenges, a novel Actor-Critic-Mass algorithm is proposed by integrating emerging mean-field game theory with online reinforcement learning. The proposed approach is able not only to learn the optimal power allocation for IoBT in a decentralized manner, but also to effectively handle uncertainties from the harsh environment at the tactical edge. In the developed scheme, each agent in the IoBT has three neural networks (NNs): 1) a critic NN that learns the optimal cost function minimizing the signal-to-interference-plus-noise ratio (SINR) error, 2) an actor NN that estimates the optimal transmitter power adjustment rate, and 3) a mass NN that learns the probability density function of all agents' transmitting power in the IoBT. The three NNs are tuned based on the Fokker-Planck-Kolmogorov (FPK) and Hamilton-Jacobi-Bellman (HJB) equations given by mean-field game theory. An IoBT wireless network has been simulated to evaluate the effectiveness of the proposed algorithm. The results demonstrate that the Actor-Critic-Mass algorithm can effectively approximate the probability distribution of all agents' transmission power and converge to the target SINR. Moreover, the optimal decentralized power allocation is obtained by integrating mean-field game theory with reinforcement learning.
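For reference, the generic coupled HJB-FPK system from mean-field game theory that such a scheme approximates, written with generic symbols (value function V, population density m, running cost L, dynamics f, noise intensity σ; this is the textbook form, not the paper's exact notation or cost):

```latex
% Generic mean-field game system: HJB for the value function V (critic),
% u* the minimizing control (actor), FPK for the population density m (mass).
\begin{aligned}
-\partial_t V(x,t) &= \min_{u}\Big\{ L(x,u,m) + \nabla V(x,t)\cdot f(x,u)
                      + \tfrac{\sigma^2}{2}\,\Delta V(x,t) \Big\},\\[2pt]
\partial_t m(x,t)  &= -\nabla\cdot\big(m(x,t)\,f(x,u^*)\big)
                      + \tfrac{\sigma^2}{2}\,\Delta m(x,t).
\end{aligned}
```

In the abstract's terms, the critic NN approximates V, the actor NN the minimizer u*, and the mass NN the density m.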
Continued advances in IoT technology have prompted new investigation into its usage for military operations, both to augment and complement existing military sensing assets and to support next-generation artificial intelligence and machine learning systems. Under the emerging Internet of Battlefield Things (IoBT) paradigm, a multitude of operational conditions (e.g., diverse asset ownership, degraded networking infrastructure, adversary activities) necessitate the development of novel security techniques, centered on the establishment of trust for individual assets and on supporting the resilience of broader systems. To advance current IoBT efforts, a set of research directions is proposed that aims to fundamentally address the issues of trust and trustworthiness in contested battlefield environments, building on prior research in the cybersecurity domain. These research directions focus on two themes: (1) Supporting trust assessment for known/unknown IoT assets; (2) Ensuring continued trust of known IoBT assets and systems.
Continued advances in IoT technology have prompted new investigation into its usage for military operations, both to augment and complement existing military sensing assets and to support next-generation artificial intelligence and machine learning systems. Under the emerging Internet of Battlefield Things (IoBT) paradigm, current operational conditions necessitate the development of novel security techniques, centered on the establishment of trust for individual assets and on supporting the resilience of broader systems. To advance current IoBT efforts, a collection of previously developed cybersecurity techniques is reviewed for applicability to the conditions presented by IoBT operational environments (e.g., diverse asset ownership, degraded networking infrastructure, adversary activities), supported by case study examples. The research techniques covered focus on two themes: (1) Supporting trust assessment for known/unknown IoT assets; (2) Ensuring continued trust of known IoT assets and IoBT systems.
In this paper, a novel anti-jamming mechanism is proposed to analyze and enhance the security of adversarial Internet of Battlefield Things (IoBT) systems. In particular, the problem is formulated as a dynamic psychological game between a soldier and an attacker. In this game, the soldier seeks to accomplish a time-critical mission by traversing a battlefield within a certain amount of time, while maintaining connectivity with an IoBT network. The attacker, on the other hand, seeks to find the optimal opportunity to compromise the IoBT network and maximize the delay of the soldier's IoBT transmission link. The psychological behavior of the soldier and the attacker is captured using tools from psychological game theory, with which the soldier's and attacker's intentions to harm one another are considered in their utilities. To solve this game, a novel learning algorithm based on Bayesian updating is proposed to find an ε-like psychological self-confirming equilibrium of the game.
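As a toy illustration of the Bayesian-updating ingredient only (the belief revision step, not the psychological self-confirming equilibrium computation; the attacker types and likelihoods below are made up):

```python
# Toy Bayesian belief update over hypothetical attacker types after observing an action.
import numpy as np

belief = np.array([0.5, 0.5])            # prior over two hypothetical attacker types
likelihood = np.array([[0.8, 0.2],       # P(action | type 0)
                       [0.3, 0.7]])      # P(action | type 1)

def update(belief, action):
    post = belief * likelihood[:, action]
    return post / post.sum()

belief = update(belief, action=1)        # observe a jamming action, revise the belief
```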
Machine-learning solutions are successfully adopted in multiple contexts, but the application of these techniques to the cyber security domain is complex and still immature. Among the many open issues that affect security systems based on machine learning, we concentrate on adversarial attacks that aim to affect the detection and prediction capabilities of machine-learning models. We consider realistic types of poisoning and evasion attacks targeting security solutions devoted to malware, spam and network intrusion detection. We explore the possible damage that an attacker can cause to a cyber detector and present some existing and original defensive techniques in the context of intrusion detection systems. This paper contains several performance evaluations based on extensive experiments using large traffic datasets. The results highlight that modern adversarial attacks are highly effective against machine-learning classifiers for cyber detection, and that existing solutions require improvements in several directions. The paper paves the way for more robust machine-learning-based techniques that can be integrated into cyber security platforms.
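A minimal sketch of one of the attack classes discussed (label-flip poisoning of a detector trained on synthetic flow features); the paper's actual experiments use large traffic datasets and stronger attack strategies:

```python
# Toy label-flip poisoning: compare a detector trained on clean vs. poisoned labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(3000, 10))
y = (X[:, 0] > 0).astype(int)                      # stand-in for benign/malicious flows
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

y_poisoned = y_tr.copy()
flip = rng.choice(len(y_tr), size=len(y_tr) // 5, replace=False)
y_poisoned[flip] = 1 - y_poisoned[flip]            # attacker flips 20% of training labels

for labels, tag in [(y_tr, "clean"), (y_poisoned, "poisoned")]:
    clf = LogisticRegression().fit(X_tr, labels)
    print(tag, "accuracy =", round(clf.score(X_te, y_te), 3))
```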
Most data manipulation attacks on deep neural networks (DNNs) during the training stage introduce a perceptible noise that can be handled by preprocessing during inference or identified during the validation phase. Therefore, data poisoning attacks during inference (e.g., adversarial attacks) are becoming more popular. However, many of them do not consider the imperceptibility factor in their optimization algorithms, and can be detected by correlation and structural similarity analysis, or are noticeable (e.g., by humans) in multi-level security systems. Moreover, the majority of inference attacks rely on some knowledge about the training dataset. In this paper, we propose a novel methodology which automatically generates imperceptible attack images by using the back-propagation algorithm on pre-trained DNNs, without requiring any information about the training dataset (i.e., it is completely training-data-unaware). We present a case study on traffic sign detection using the VGGNet trained on the German Traffic Sign Recognition Benchmarks dataset in an autonomous driving use case. Our results demonstrate that the generated attack images successfully cause misclassification while remaining imperceptible in both “subjective” and “objective” quality tests.
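A minimal sketch of the general mechanism: an iterative, gradient-based perturbation of a single image against a pre-trained network, with a small budget to keep the noise imperceptible. The ImageNet VGG-16 weights, random input, target label, and budget here are stand-ins (the paper uses a VGGNet on GTSRB), and this is not the paper's exact optimization:

```python
# PGD-style sketch: perturb one image against a pre-trained DNN, no training data needed.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
x = torch.rand(1, 3, 224, 224)                 # stand-in input image in [0, 1]
target = torch.tensor([1])                     # hypothetical target (wrong) class
eps, alpha, steps = 2 / 255, 0.5 / 255, 20     # small L-inf budget keeps noise imperceptible

x_adv = x.clone()
for _ in range(steps):
    x_adv.requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), target)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = (x_adv - alpha * grad.sign()).detach()                     # step toward the target class
    x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)  # stay inside the budget
```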
Deep neural networks are widely used in many walks of life. Techniques such as transfer learning enable neural networks pre-trained on certain tasks to be retrained for a new duty, often with much less data. Users have access to both pre-trained model parameters and model definitions, along with testing data, but have either limited access to the training data or only a subset of it. This is risky for system-critical applications, where adversarial information can be maliciously included during the training phase to attack the system. Determining the existence and level of attack in a model is challenging. In this paper, we present evidence of how adversarially attacking training data increases the boundary of model parameters, using a CNN model and the MNIST dataset as a test case. This expansion is due to the new characteristics of the poisonous data added to the training data. Approaching the problem from the feature space learned by the network provides a relation between the learned features and the possible parameters taken by the model during the training phase. An algorithm is proposed to determine whether a given network was attacked during training by comparing the boundaries of the parameter distributions in intermediate layers of the model, estimated using the Maximum Entropy Principle and a variational inference approach.
Federated learning is a novel distributed learning framework in which a deep learning model is trained collaboratively among thousands of participants. Only model parameters are shared between the server and the participants, which prevents the server from directly accessing the private training data. However, we notice that the federated learning architecture is vulnerable to an active attack from insider participants, called a poisoning attack, in which the attacker can act as a benign participant in federated learning and upload a poisoned update to the server, thereby easily affecting the performance of the global model. In this work, we study and evaluate a poisoning attack on a federated learning system based on generative adversarial nets (GAN). That is, an attacker first acts as a benign participant and stealthily trains a GAN to mimic prototypical samples from the other participants' training sets, which do not belong to the attacker. These generated samples, fully controlled by the attacker, are then used to produce the poisoning updates, and the global model is compromised by the attacker uploading the scaled poisoning updates to the server. In our evaluation, we show that the attacker in our construction can successfully generate samples of other benign participants using the GAN, and that the global model achieves more than 80% accuracy on both the poisoning task and the main task.
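A toy numerical illustration of why scaling the malicious update matters in such an attack (a FedAvg-style aggregation step with made-up update vectors; the GAN-based generation of the malicious update is not shown):

```python
# Toy FedAvg round: one scaled malicious update dominates nine honest ones.
import numpy as np

rng = np.random.default_rng(2)
honest = [rng.normal(0.0, 0.01, size=4) for _ in range(9)]   # small, benign updates
poison = np.array([0.5, -0.5, 0.5, -0.5])                    # attacker's desired direction
scale = 10.0                                                 # boost so averaging does not wash it out

aggregated = np.mean(honest + [scale * poison], axis=0)      # server-side averaging step
print(aggregated)   # dominated by the scaled poisoned update
```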
DNS-based domain name resolution is one of the most fundamental Internet services. Meanwhile, DNS cache poisoning attacks have become a critical threat in the cyber world. In addition to Kaminsky attacks, falsified data from compromised authoritative DNS servers has also become a threat. Several solutions have been proposed in the literature to prevent DNS cache poisoning attacks in the former case, such as DNSSEC (DNS Security Extensions); however, no effective solutions have been proposed for the latter case. Moreover, due to performance issues and the significant workload increase on DNS cache servers, DNSSEC has not yet been widely deployed. In this work, we propose an advanced detection method against DNS cache poisoning attacks using machine learning techniques. In the proposed method, in addition to the basic 5-tuple information of a DNS packet, we add a number of special features extracted from the standard DNS protocols as well as from heuristic aspects, such as “time-related features”, “GeoIP-related features” and “trigger of cached DNS data”, in order to identify the DNS response packets used for cache poisoning attacks, especially those from compromised authoritative DNS servers. In this paper, as a work in progress, we describe the basic idea and concept of the proposed method as well as the intended network topology of the experimental environment, while the prototype implementation, training data preparation, model creation and evaluation are left for future work.
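A sketch of what such a per-response feature vector might look like; the helper objects `resp`, `cache`, and `geoip` and the feature names are hypothetical, since the paper's concrete feature set is still future work:

```python
# Hypothetical feature extraction for one DNS response packet.
def extract_features(resp, cache, geoip):
    return {
        # basic 5-tuple information
        "src_ip": resp.src_ip, "dst_ip": resp.dst_ip,
        "src_port": resp.src_port, "dst_port": resp.dst_port,
        "proto": resp.proto,
        # time-related features
        "response_delay_ms": resp.recv_time_ms - resp.query_time_ms,
        "ttl_delta": abs(resp.ttl - cache.last_ttl(resp.qname)),
        # GeoIP-related features
        "asn_changed": geoip.asn(resp.src_ip) != cache.last_asn(resp.qname),
        "country_changed": geoip.country(resp.src_ip) != cache.last_country(resp.qname),
        # trigger of cached DNS data
        "unsolicited": not cache.has_pending_query(resp.qname, resp.txid),
    }
```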
As AI systems become more ubiquitous, securing them becomes an emerging challenge. Over the years, with the surge in online social media use and the data available for analysis, AI systems have been built to extract, represent and use this information. The credibility of this information extracted from open sources, however, can often be questionable. Malicious or incorrect information can cause a loss of money, reputation, and resources; and in certain situations, pose a threat to human life. In this paper, we use an ensembled semi-supervised approach to determine the credibility of Reddit posts by estimating their reputation score to ensure the validity of information ingested by AI systems. We demonstrate our approach in the cybersecurity domain, where security analysts utilize these systems to determine possible threats by analyzing the data scattered on social media websites, forums, blogs, etc.
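A minimal sketch of the semi-supervised ingredient using scikit-learn's self-training wrapper over text features; the post texts and labels here are synthetic, and the paper's actual approach is an ensemble with a richer feature set:

```python
# Self-training a credibility classifier from a few labeled posts plus unlabeled ones.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

posts = ["new CVE exploited in the wild", "free followers click here",
         "patch released for the RCE bug", "totally unverified rumor about a breach"]
labels = [1, 0, -1, -1]                      # 1 = credible, 0 = not, -1 = unlabeled

X = TfidfVectorizer().fit_transform(posts)
clf = SelfTrainingClassifier(LogisticRegression(), threshold=0.6)
clf.fit(X, labels)                           # pseudo-labels the unlabeled posts iteratively
print(clf.predict(X))
```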
Internet application providers now have more incentive than ever to collect user data, which greatly increases the risk of user privacy violations due to the emergence of deep neural networks. In this paper, we propose TensorClog, a poisoning attack technique designed for privacy protection against deep neural networks. TensorClog has three properties, each serving a privacy protection purpose: 1) training on TensorClog-poisoned data results in lower inference accuracy, reducing the incentive for abusive data collection; 2) training on TensorClog-poisoned data converges to a larger loss, which prevents the neural network from learning the private information; and 3) TensorClog regularizes the perturbation to maintain high structural similarity, so that the poisoning does not affect the actual content of the data. Applying our TensorClog poisoning technique to the CIFAR-10 dataset increases both the converged training loss and the test error by 300% and 272%, respectively. It manages to preserve the data's human perceptual quality with a high SSIM index of 0.9905. More experiments, including different limited-information attack scenarios and a real-world application transferred from pre-trained ImageNet models, are presented to further evaluate TensorClog's effectiveness in more complex situations.
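A rough sketch of the structural-similarity constraint idea (shrinking a perturbation until SSIM stays above a threshold); this illustrates the general mechanism, not TensorClog's actual regularized optimization, and it assumes a recent scikit-image:

```python
# Shrink a perturbation until the poisoned image keeps high structural similarity.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def enforce_ssim(clean, poisoned, min_ssim=0.99, shrink=0.9):
    """clean, poisoned: float images in [0, 1] with shape (H, W, C)."""
    delta = poisoned - clean
    while ssim(clean, np.clip(clean + delta, 0, 1),
               data_range=1.0, channel_axis=-1) < min_ssim:
        delta *= shrink                      # scale the perturbation down until SSIM is acceptable
    return np.clip(clean + delta, 0, 1)
```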
Context: Software security is an imperative aspect of software quality. Early detection of vulnerable code during development can better ensure the security of the codebase and minimize testing efforts. Although traditional software metrics are used for early detection of vulnerabilities, they do not clearly address the granularity level of the issue to precisely pinpoint vulnerabilities. The goal of this study is to employ method-level traceable patterns (nano-patterns) in vulnerability prediction and empirically compare their performance with traditional software metrics. The concept of nano-patterns is similar to design patterns, but these constructs can be automatically recognized and extracted from source code. If nano-patterns can better predict vulnerable methods compared to software metrics, they can be used in developing vulnerability prediction models with better accuracy. Aims: This study explores the performance of method-level patterns in vulnerability prediction. We also compare them with method-level software metrics. Method: We studied vulnerabilities reported for two major releases of Apache Tomcat (6 and 7), Apache CXF, and two stand-alone Java web applications. We used three machine learning techniques to predict vulnerabilities using nano-patterns as features. We applied the same techniques using method-level software metrics as features and compared their performance with nano-patterns. Results: We found that nano-patterns show lower false negative rates for classifying vulnerable methods (for Tomcat 6, 21% vs 34.7%) and therefore, have higher recall in predicting vulnerable code than the software metrics used. On the other hand, software metrics show higher precision than nano-patterns (79.4% vs 76.6%). Conclusion: In summary, we suggest developers use nano-patterns as features for vulnerability prediction to augment existing approaches as these code constructs outperform standard metrics in terms of prediction recall.
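A minimal sketch of the evaluation protocol described above, with binary method-level features standing in for nano-patterns and recall, precision and false negative rate reported (the data is synthetic, not the Tomcat/CXF corpora):

```python
# Toy method-level prediction: binary "pattern present" features; recall, precision, FNR.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.integers(0, 2, size=(1000, 30))             # 30 stand-in nano-pattern flags per method
y = (X[:, :3].sum(axis=1) >= 2).astype(int)         # stand-in "vulnerable" label
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=3)

pred = RandomForestClassifier(random_state=3).fit(X_tr, y_tr).predict(X_te)
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print("recall =", round(recall_score(y_te, pred), 3),
      "precision =", round(precision_score(y_te, pred), 3),
      "FNR =", round(fn / (fn + tp), 3))
```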
Widely applied in various fields, deep learning (DL) is becoming the key driving force in industry. Although it has achieved great success in artificial intelligence tasks, like traditional software it has defects that, once triggered, can cause unpredictable accidents and losses. In this paper, we propose a test case generation technique based on an adversarial sample generation algorithm for image classification deep neural networks (DNNs), which can generate a large number of good test cases for the testing of DNNs, especially when existing test cases are insufficient. We briefly introduce our method and implement the framework. We conduct experiments on some classic DNN models and datasets, and further evaluate the generated test set using a coverage metric based on the states of the DNN.
In a transient distributed cloud computing environment, software is vulnerable to attacks that compromise its functional completeness, so functional testing is necessary. In order to address the high overhead and high complexity of unsupervised testing methods, an intelligent evaluation method for transient analysis software function testing based on an active deep learning algorithm is proposed. First, an active deep learning mathematical model of transient analysis software function testing is constructed using an association rule mining method, and the correlation dimension characteristics of software functional failures are analyzed. Then the reliability of the software is measured by the spectral density distribution of software functional completeness. The intelligent evaluation model of transient analysis software function testing is established in the transient distributed cloud computing environment, realizing both function testing and intelligent reliability evaluation. Finally, the performance of the method is verified through simulation experiments. The results show that, when this method is used to test the functions of transient analysis software, functional integrity issues are located with high accuracy and the intelligent evaluation of the function testing adapts well, ensuring safe and reliable operation of the software.
In recent years, more and more testing criteria for deep learning systems have been proposed to ensure system robustness and reliability. These criteria were defined based on different perspectives of diversity. However, there is a lack of comprehensive investigation into which diversities are most essential for a testing criterion for deep learning systems. Therefore, in this paper, we conduct an empirical study to investigate the relation between test diversities and erroneous behaviors of deep learning models. We define five metrics to reflect diversities in neuron activities, and leverage metamorphic testing to detect erroneous behaviors. We investigate the correlation between the metrics and erroneous behaviors, and go a step further to measure the quality of test suites under the guidance of the defined metrics. Our results provide comprehensive insights into the diversities that are essential for testing criteria to exhibit good fault detection ability.
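As a concrete example of an activation-diversity metric of the kind studied (simple neuron coverage with a fixed threshold; the paper's five metrics are its own definitions, and this threshold and setup are assumptions):

```python
# Neuron coverage: fraction of neurons activated above a threshold by at least one test input.
import numpy as np

def neuron_coverage(layer_activations, threshold=0.5):
    """layer_activations: list of (n_inputs, n_neurons) arrays, one per layer."""
    covered = total = 0
    for acts in layer_activations:
        covered += int(np.count_nonzero((acts > threshold).any(axis=0)))
        total += acts.shape[1]
    return covered / total

# Example with made-up activations for two layers.
acts = [np.random.rand(100, 64), np.random.rand(100, 32)]
print(round(neuron_coverage(acts), 3))
```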
Software vulnerabilities often remain hidden until an attacker exploits the weak/insecure code. Therefore, testing the software from a vulnerability discovery perspective becomes challenging for developers if they do not inspect their code thoroughly (which is time-consuming). We propose that vulnerability prediction using certain software metrics can support the testing process by identifying vulnerable code-components (e.g., functions, classes, etc.). Once a code-component is predicted as vulnerable, the developers can focus their testing efforts on it, thereby avoiding the time/effort required for testing the entire application. The current paper presents a study that compares how software metrics perform as vulnerability predictors for software projects developed in two different languages (Java vs Python). The goal of this research is to analyze the vulnerability prediction performance of software metrics for different programming languages. We designed and conducted experiments on security vulnerabilities reported for three Java projects (Apache Tomcat 6, Tomcat 7, Apache CXF) and two Python projects (Django and Keystone). In this paper, we focus on a specific type of code component: Functions. We apply Machine Learning models for predicting vulnerable functions. Overall results show that software metrics-based vulnerability prediction is more useful for Java projects than Python projects (i.e., software metrics when used as features were able to predict Java vulnerable functions with a higher recall and precision compared to Python vulnerable functions prediction).