Biblio

Filters: Keyword is adversarial samples
2021-09-07
Huang, Weiqing, Peng, Xiao, Shi, Zhixin, Ma, Yuru.  2020.  Adversarial Attack against LSTM-Based DDoS Intrusion Detection System. 2020 IEEE 32nd International Conference on Tools with Artificial Intelligence (ICTAI). :686–693.
Machine learning is now a popular method for DDoS detection, but machine learning algorithms are highly vulnerable to adversarial samples. Multiple methods for generating adversarial samples have been proposed; however, they cannot be applied directly to LSTM-based DDoS detection because of the discrete nature and utility requirements of its input samples. In this paper, we propose two methods for generating DDoS adversarial samples, named Genetic Attack (GA) and Probability Weighted Packet Saliency Attack (PWPSA). Both methods modify the original input sample by inserting or replacing individual packets. In GA, we evolve a population of modified samples with a genetic algorithm and select the evasive variant from it. In PWPSA, we modify the original sample iteratively, using position saliency together with the packet score to determine the insertion or replacement position at each step. Experimental results on the CICIDS2017 dataset show that both methods can bypass DDoS detectors with a high success rate.
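A minimal sketch of the GA idea described above, assuming packets can be encoded as small feature vectors and that a detector_score function exposing the detector's DDoS probability is available (both are illustrative stand-ins, not the authors' implementation):

```python
# Sketch of a Genetic Attack (GA) over packet sequences.
# Assumptions (not from the paper): a packet is a 4-feature vector, and
# detector_score(sample) returns the detector's "DDoS" probability in [0, 1].
import random

def random_packet():
    # Hypothetical 4-feature packet (e.g., length, flags, inter-arrival time, direction).
    return [random.random() for _ in range(4)]

def mutate(sample):
    s = list(sample)
    pos = random.randrange(len(s))
    if random.random() < 0.5:
        s.insert(pos, random_packet())   # insert a crafted packet
    else:
        s[pos] = random_packet()         # replace an existing packet
    return s

def crossover(a, b):
    # One-point crossover; assumes samples contain at least two packets.
    cut = random.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def genetic_attack(sample, detector_score, pop=20, gens=50):
    population = [mutate(sample) for _ in range(pop)]
    for _ in range(gens):
        # Lower detector score (less "DDoS-like") is fitter.
        population.sort(key=detector_score)
        if detector_score(population[0]) < 0.5:
            return population[0]          # evasive variant found
        survivors = population[: pop // 2]
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop - len(survivors))]
        population = survivors + children
    return min(population, key=detector_score)
```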
2021-07-27
Xu, Jiahui, Wang, Chen, Li, Tingting, Xiang, Fengtao.  2020.  Improved Adversarial Attack against Black-box Machine Learning Models. 2020 Chinese Automation Congress (CAC). :5907–5912.
The existence of adversarial samples calls into question the security of machine learning models in practical applications, especially under black-box adversarial attacks, which closely match real application scenarios. Efficient search for black-box attack samples helps train more robust models. We consider the setting in which the attacker can obtain nothing except the final predicted label. For this problem, the current state-of-the-art method is the Boundary Attack (BA) and its variants, such as the Biased Boundary Attack (BBA); however, these still require a large number of queries and considerable time. In this paper, we propose a novel method to address these shortcomings. First, we improve the algorithm for generating initial adversarial samples with a smaller L2 distance. Second, we combine a swarm intelligence algorithm, Particle Swarm Optimization (PSO), with the Biased Boundary Attack and propose the PSO-BBA method. Finally, we experiment on the ImageNet dataset and compare our algorithm with the baseline. The results show that: (1) our improved initial point selection algorithm effectively reduces the number of queries; (2) compared with the most advanced methods, our PSO-BBA method improves convergence speed while preserving attack accuracy; (3) our method performs well for both targeted and untargeted attacks.
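A rough sketch of how PSO can drive a decision-based, label-only attack of this kind; the predict hard-label oracle, the fitness definition, and all hyperparameters are assumptions for illustration, not the PSO-BBA implementation:

```python
# Sketch of a PSO-driven, label-only (decision-based) attack.
# Assumptions: predict(x) returns a hard label; fitness is the L2 distance
# to the original image among points that remain adversarial.
import numpy as np

def pso_attack(x_orig, x_adv_init, predict, true_label,
               n_particles=10, iters=100, w=0.5, c1=1.0, c2=1.0):
    dim = x_orig.shape
    # Seed one particle with a known adversarial point, jitter the rest.
    pos = np.stack([x_adv_init] +
                   [x_adv_init + 0.01 * np.random.randn(*dim)
                    for _ in range(n_particles - 1)])
    vel = np.zeros_like(pos)

    def fitness(x):
        # Points that fall back into the true class are invalid.
        if predict(x) == true_label:
            return np.inf
        return np.linalg.norm(x - x_orig)

    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    g = pbest[pbest_fit.argmin()].copy()
    for _ in range(iters):
        r1, r2 = np.random.rand(2)
        # Standard PSO update toward personal and global bests.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        fit = np.array([fitness(p) for p in pos])
        improved = fit < pbest_fit
        pbest[improved] = pos[improved]
        pbest_fit[improved] = fit[improved]
        g = pbest[pbest_fit.argmin()].copy()
    return g  # adversarial point closest to x_orig found so far
```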
2021-03-29
Yilmaz, I., Masum, R., Siraj, A..  2020.  Addressing Imbalanced Data Problem with Generative Adversarial Network For Intrusion Detection. 2020 IEEE 21st International Conference on Information Reuse and Integration for Data Science (IRI). :25–30.

Machine learning techniques help to understand underlying patterns in datasets and to develop defense mechanisms against cyber attacks. The Multilayer Perceptron (MLP) is a machine learning technique used to distinguish attack data from benign data. However, it is difficult to construct an effective model when imbalances in the dataset prevent proper classification of attack samples. In this research, we first perform data wrangling on the UGR'16 dataset, which helps prepare a test set from the original dataset for training the neural network model effectively. We experiment with inputs of varying sizes (i.e., 10,000; 50,000; 1 million) to observe how the distribution of features affects the accuracy of the MLP neural network model. We then use a Generative Adversarial Network (GAN) model that produces samples of different attack labels (e.g., blacklist, anomaly spam, ssh scan) to balance the dataset; these samples are generated based on data from the UGR'16 dataset. Further experiments with the MLP neural network model show that a balanced attack-sample dataset, made possible with the GAN, produces more accurate results than an imbalanced one.
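To make the balancing step concrete, here is a minimal GAN sketch in PyTorch for synthesizing minority-class attack flows; the feature dimension, network sizes, and feature scaling are illustrative choices, not the paper's configuration:

```python
# Sketch: a GAN that synthesizes minority-class attack flows to balance
# a NetFlow-style dataset. FEAT and NOISE are assumed dimensions.
import torch
import torch.nn as nn

FEAT = 10    # assumed number of flow features after preprocessing
NOISE = 32   # latent noise dimension

G = nn.Sequential(nn.Linear(NOISE, 64), nn.ReLU(),
                  nn.Linear(64, FEAT), nn.Sigmoid())   # features scaled to [0, 1]
D = nn.Sequential(nn.Linear(FEAT, 64), nn.LeakyReLU(0.2),
                  nn.Linear(64, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_minority_batch):
    b = real_minority_batch.size(0)
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)
    # Discriminator: real minority samples vs. generated ones.
    fake = G(torch.randn(b, NOISE)).detach()
    loss_d = bce(D(real_minority_batch), ones) + bce(D(fake), zeros)
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    # Generator: fool the discriminator.
    fake = G(torch.randn(b, NOISE))
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

# After training, G(torch.randn(n, NOISE)) yields n synthetic samples of
# the minority attack class (e.g., "ssh scan") to append to the training set.
```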

2021-03-09
Cui, W., Li, X., Huang, J., Wang, W., Wang, S., Chen, J..  2020.  Substitute Model Generation for Black-Box Adversarial Attack Based on Knowledge Distillation. 2020 IEEE International Conference on Image Processing (ICIP). :648–652.
Although deep convolutional neural networks (CNNs) perform well in many computer vision tasks, their classification mechanism is highly vulnerable to the perturbations of adversarial attacks. In this paper, we propose a new algorithm that generates a substitute model for black-box CNN models using knowledge distillation. The proposed algorithm distills multiple CNN teacher models into a compact student model, which serves as a substitute for the black-box CNN models to be attacked. Black-box adversarial samples can then be generated on this substitute model using various white-box attack methods. In our experiments on ResNet18 and DenseNet121, training the substitute model with knowledge distillation boosts the attack success rate (ASR) by 20%.
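The distillation step might look like the following sketch, which averages softened teacher outputs and trains the student against them with a KL loss (the temperature and the averaging scheme are common distillation choices, not necessarily the paper's exact formulation):

```python
# Sketch: distill multiple teacher CNNs into one student substitute model.
import torch
import torch.nn.functional as F

def distill_step(student, teachers, images, optimizer, T=4.0):
    # Average the teachers' temperature-softened predictions as the target.
    with torch.no_grad():
        soft_target = torch.stack(
            [F.softmax(t(images) / T, dim=1) for t in teachers]).mean(0)
    log_p_student = F.log_softmax(student(images) / T, dim=1)
    # KL divergence between student and averaged teacher distributions,
    # scaled by T^2 as is standard in distillation.
    loss = F.kl_div(log_p_student, soft_target, reduction="batchmean") * T * T
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

White-box attacks such as FGSM or PGD can then be run against the trained student, and the resulting adversarial samples transferred to the black-box target.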
2020-11-02
Huang, S., Chen, Q., Chen, Z., Chen, L., Liu, J., Yang, S..  2019.  A Test Cases Generation Technique Based on an Adversarial Samples Generation Algorithm for Image Classification Deep Neural Networks. 2019 IEEE 19th International Conference on Software Quality, Reliability and Security Companion (QRS-C). :520–521.

Widely applied across various fields, deep learning (DL) is becoming a key driving force in industry. Although it has achieved great success in artificial intelligence tasks, like traditional software it has defects that, once triggered, can cause unpredictable accidents and losses. In this paper, we propose a test case generation technique based on an adversarial sample generation algorithm for image classification deep neural networks (DNNs), which can generate a large number of useful test cases for testing DNNs, especially when existing test cases are insufficient. We briefly introduce our method and implement the framework, conduct experiments on classic DNN models and datasets, and further evaluate the generated test set using a coverage metric based on the states of the DNN.
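Since the abstract does not spell out the generation algorithm, the sketch below uses FGSM, a standard gradient-sign attack, purely as a stand-in to show how adversarial samples can become test cases:

```python
# Sketch: turn adversarial samples into DNN test cases via FGSM (a
# stand-in for the paper's unspecified generation algorithm).
import torch
import torch.nn.functional as F

def fgsm_test_case(model, image, label, eps=0.03):
    # image: (C, H, W) tensor in [0, 1]; label: 0-d long tensor.
    x = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x.unsqueeze(0)), label.unsqueeze(0))
    loss.backward()
    # Move each pixel one eps-step in the direction that increases loss.
    x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()
    return x_adv

def generate_test_set(model, dataset, eps=0.03):
    # Each perturbed input, paired with its original label, is a new test case.
    return [(fgsm_test_case(model, img, lbl, eps), lbl) for img, lbl in dataset]
```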

2020-09-04
Wu, Yi, Liu, Jian, Chen, Yingying, Cheng, Jerry.  2019.  Semi-black-box Attacks Against Speech Recognition Systems Using Adversarial Samples. 2019 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN). :1–5.
As automatic speech recognition (ASR) systems have been integrated into a diverse set of devices around us in recent years, their security vulnerabilities have become an increasing public concern. Existing studies have demonstrated that deep neural networks (DNNs), acting as the computation core of ASR systems, are vulnerable to deliberately designed adversarial attacks. Using gradient descent, prior work has successfully generated adversarial samples that disturb ASR systems and produce the transcripts the adversary intends. Most of this research simulated white-box attacks, which require knowledge of all components of the targeted ASR system. In this work, we propose the first semi-black-box attack against the Kaldi ASR system. Requiring only partial information from Kaldi and none from the DNN, we embed malicious commands into a single audio clip using a gradient-free genetic algorithm. The crafted audio clip is recognized by Kaldi as the embedded malicious commands while remaining unnoticeable to humans. Experiments show that our attack achieves a high success rate with unnoticeable perturbations on three types of audio clips (pop music, pure music, and human commands) without access to the underlying DNN's parameters or architecture.
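A gradient-free genetic search over raw waveforms might look like this sketch; the transcribe oracle, the edit-distance fitness, and the perturbation budget are illustrative assumptions rather than the authors' Kaldi-specific pipeline:

```python
# Sketch: gradient-free genetic search that perturbs a raw waveform until
# the recognizer emits a target command. transcribe(audio) -> str is an
# assumed hard-output oracle.
import random
import numpy as np

def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance.
    dp = np.arange(len(b) + 1)
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[-1]

def genetic_audio_attack(audio, transcribe, target, pop=30, gens=200,
                         eps=0.002):
    population = [audio + np.random.uniform(-eps, eps, audio.shape)
                  for _ in range(pop)]
    for _ in range(gens):
        # Fitness: how close the transcript is to the target command.
        population.sort(key=lambda x: edit_distance(transcribe(x), target))
        if transcribe(population[0]) == target:
            return population[0]
        parents = population[: pop // 3]
        population = parents + [
            random.choice(parents)
            + np.random.uniform(-eps, eps, audio.shape)  # mutate a parent
            for _ in range(pop - len(parents))]
    return population[0]
```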
2019-01-16
Gao, J., Lanchantin, J., Soffa, M. L., Qi, Y..  2018.  Black-Box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers. 2018 IEEE Security and Privacy Workshops (SPW). :50–56.

Although various techniques have been proposed to generate adversarial samples for white-box attacks on text, little attention has been paid to black-box attacks, which present a more realistic scenario. In this paper, we present a novel algorithm, DeepWordBug, to effectively generate small text perturbations in a black-box setting that force a deep-learning classifier to misclassify a text input. We develop novel scoring strategies to find the most important words to modify such that the deep classifier makes a wrong prediction. Simple character-level transformations are applied to the highest-ranked words in order to minimize the edit distance of the perturbation. We evaluated DeepWordBug on two real-world text datasets: Enron spam emails and IMDB movie reviews. Our experimental results indicate that DeepWordBug can reduce the classification accuracy from 99% to 40% on Enron and from 87% to 26% on IMDB. Our results strongly demonstrate that the adversarial sequences generated against one deep-learning model can similarly evade other deep models.
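A DeepWordBug-style pipeline can be sketched as follows: score words by the confidence drop when each is removed, then apply a character-level swap to the top-ranked words (the predict_proba oracle and this particular scoring/transform pair are simplified illustrations of the paper's several strategies):

```python
# Sketch of a DeepWordBug-style black-box text perturbation.
# Assumption: predict_proba(text) returns a list of class probabilities.
import random

def word_scores(words, predict_proba, label):
    # Score each word by how much the label's confidence drops when
    # that word is removed (one query per word, plus one baseline query).
    base = predict_proba(" ".join(words))[label]
    scores = []
    for i in range(len(words)):
        ablated = " ".join(words[:i] + words[i + 1:])
        scores.append(base - predict_proba(ablated)[label])
    return scores

def swap_chars(word):
    # One character-level transformation: swap two adjacent characters,
    # keeping the edit distance of the perturbation small.
    if len(word) < 2:
        return word
    i = random.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def perturb(text, predict_proba, label, k=3):
    words = text.split()
    scores = word_scores(words, predict_proba, label)
    top = sorted(range(len(words)), key=lambda i: scores[i], reverse=True)[:k]
    for i in top:
        words[i] = swap_chars(words[i])
    return " ".join(words)
```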

Bai, X., Niu, W., Liu, J., Gao, X., Xiang, Y., Liu, J..  2018.  Adversarial Examples Construction Towards White-Box Q Table Variation in DQN Pathfinding Training. 2018 IEEE Third International Conference on Data Science in Cyberspace (DSC). :781–787.

As a new research hotspot in the field of artificial intelligence, deep reinforcement learning (DRL) has achieved notable success in fields such as robot control, computer vision, and natural language processing. At the same time, whether its applications can be attacked, and how resistant they are to such attacks, has become a hot topic in recent years. Therefore, we select the representative Deep Q-Network (DQN) algorithm from deep reinforcement learning, use robotic automatic pathfinding as the attack scenario for the first time, and attack the DQN algorithm through its vulnerability to adversarial samples. In this paper, we first use DQN to find the optimal path and analyze the rules of DQN pathfinding. Then, we propose a method that effectively finds vulnerable points via white-box Q-table variation during DQN pathfinding training. Finally, we build a simulation environment as an experimental platform to test our method; across multiple experiments, we successfully find adversarial examples, and the results show that our proposed method is effective.
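One plausible reading of the vulnerable-point search, sketched below: with white-box access to the Q-table, states along the learned path where the best and second-best action values are nearly tied are the easiest to flip with a small perturbation (this gap criterion is our illustrative assumption, not the paper's exact rule):

```python
# Sketch: locate vulnerable states via white-box access to the Q-table.
# A small gap between the top two action values at a state means a tiny
# Q-value perturbation can change the chosen action there.
import numpy as np

def action_gap(q_values):
    # Difference between the best and second-best Q-values for one state.
    top2 = np.sort(q_values)[-2:]
    return top2[1] - top2[0]

def vulnerable_states(q_table, path, k=3):
    """q_table: dict mapping state -> array of Q-values (one per action);
    path: sequence of states visited by the trained DQN policy."""
    gaps = {s: action_gap(q_table[s]) for s in path}
    # The k states with the smallest gaps are the easiest to attack.
    return sorted(gaps, key=gaps.get)[:k]
```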