Title | Evaluation of Adversarial Attacks Based on DL in Communication Networks |
Publication Type | Conference Paper |
Year of Publication | 2020 |
Authors | Bao, Zhida, Zhao, Haojun |
Conference Name | 2020 7th International Conference on Dependable Systems and Their Applications (DSA) |
Date Published | November
Keywords | adversarial example, Black Box Attacks, Communication networks, communication security, composability, Deep Neural Network, Individual Identification, Information security, Metrics, Neural networks, Perturbation methods, pubcrawl, reliability, Resiliency, Testing |
Abstract | Deep Neural Networks (DNNs) have strong capabilities for memorization, feature identification, and automatic analysis, enabling them to solve various complex problems. However, DNN classifiers are notably fragile: adding a few imperceptible perturbations to the original examples leads to classification errors. In the field of communications, adversarial examples greatly reduce the accuracy of signal identification, posing serious information security risks. Since adversarial examples are a serious threat to the security of DNN models, studying their generation mechanisms and testing their attack effects are critical to ensuring the information security of communication networks. This paper studies the generation of adversarial examples and their influence on the accuracy of DNN-based communication signal identification. It further examines their effects under both white-box and black-box models, and explores how factors such as the perturbation level and the number of iterative steps shape the attack's impact. The insights of this study should help secure information networks and guide the design of robust DNN-based communication systems.
DOI | 10.1109/DSA51864.2020.00044 |
Citation Key | bao_evaluation_2020 |
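The abstract refers to exploring "perturbation levels and iterative steps" of adversarial attacks. As an illustration only (not the paper's actual method or models), here is a minimal sketch of an iterative sign-gradient, FGSM-style attack on a toy logistic classifier; all names and parameters (`w`, `b`, `epsilon`, `step_size`, `steps`) are hypothetical assumptions:

```python
import math

# Illustrative sketch of an iterative sign-gradient ("FGSM-style") attack
# on a toy logistic classifier. Names and parameters are assumptions for
# demonstration, not taken from the cited paper.

def predict(w, b, x):
    """Sigmoid probability of class 1 for a linear model."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def loss_grad_x(w, b, x, y):
    """Gradient of the log-loss w.r.t. the input x (chain rule through sigmoid)."""
    p = predict(w, b, x)
    return [(p - y) * wi for wi in w]

def iterative_fgsm(w, b, x, y, epsilon=0.5, step_size=0.05, steps=20):
    """Ascend the loss with signed gradient steps, projected to an L-inf ball.

    epsilon controls the perturbation level; steps controls the number of
    iterations -- the two factors the abstract says the paper evaluates.
    """
    x_adv = list(x)
    for _ in range(steps):
        g = loss_grad_x(w, b, x_adv, y)
        # step in the sign of the gradient to increase the loss
        x_adv = [xa + step_size * (1 if gi > 0 else -1 if gi < 0 else 0)
                 for xa, gi in zip(x_adv, g)]
        # project back into the epsilon-ball around the clean input
        x_adv = [xi + max(-epsilon, min(epsilon, xa - xi))
                 for xi, xa in zip(x, x_adv)]
    return x_adv

w, b = [2.0, -1.5], 0.1
x_clean = [0.4, -0.2]            # confidently classified as class 1
y_true = 1
x_adv = iterative_fgsm(w, b, x_clean, y_true)
print(predict(w, b, x_clean), predict(w, b, x_adv))
```

Each iteration moves the input in the direction that most increases the classifier's loss, then clips the total perturbation so it stays within the allowed budget; larger `epsilon` or more `steps` generally degrade classification confidence further, which is the kind of trade-off the paper's evaluation examines.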