Biblio

Filters: Author is Zhao, Haojun
2021-10-12
Zhao, Haojun, Lin, Yun, Gao, Song, Yu, Shui.  2020.  Evaluating and Improving Adversarial Attacks on DNN-Based Modulation Recognition. GLOBECOM 2020 - 2020 IEEE Global Communications Conference. :1–5.
The discovery of adversarial examples poses a serious risk to deep neural networks (DNNs). By adding a subtle perturbation that is imperceptible to the human eye, a well-behaved DNN model can be easily fooled into completely changing the prediction categories of the input samples. However, research on adversarial attacks in the field of modulation recognition has mainly focused on increasing the prediction error of the classifier while ignoring the perceptual invisibility of the attack. Aiming at the task of DNN-based modulation recognition, this study designs the Fitting Difference as a metric to measure the perturbed waveforms and proposes a new method, the Nesterov Adam Iterative Method, to generate adversarial examples. We show that the proposed algorithm not only mounts effective white-box attacks but can also attack a black-box model. Moreover, our method improves the perceptual invisibility of the attack waveforms to a certain degree, thereby reducing the risk of an attack being detected.
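The abstract does not spell out the attack algorithm, but a Nesterov look-ahead combined with Adam-style moment estimates can be folded into a standard iterative sign-gradient attack. The sketch below is only illustrative of that general idea; the model, waveform shapes, and all hyperparameters are assumptions, not the authors' implementation.

```python
# Illustrative sketch of an iterative gradient attack that mixes a Nesterov
# look-ahead step with Adam-style moments (assumed interpretation of the
# "Nesterov Adam Iterative Method"; not the paper's actual code).
import torch
import torch.nn.functional as F

def nesterov_adam_attack(model, x, y, eps=0.03, steps=10, mu=0.9,
                         beta1=0.9, beta2=0.999, delta=1e-8):
    """Generate an adversarial waveform within an L-inf ball of radius eps."""
    alpha = eps / steps                      # per-step perturbation budget
    x_adv = x.clone().detach()
    m = torch.zeros_like(x)                  # first moment (Adam)
    v = torch.zeros_like(x)                  # second moment (Adam)
    for t in range(1, steps + 1):
        # Nesterov look-ahead: take the gradient at the anticipated point.
        x_look = (x_adv + mu * alpha * m).detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_look), y)
        grad = torch.autograd.grad(loss, x_look)[0]
        # Adam-style moment updates with bias correction.
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad ** 2
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        # Ascend the loss and project back into the eps-ball around x.
        x_adv = x_adv + alpha * torch.sign(m_hat / (v_hat.sqrt() + delta))
        x_adv = torch.clamp(x_adv, x - eps, x + eps).detach()
    return x_adv
```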
2021-07-27
Bao, Zhida, Zhao, Haojun.  2020.  Evaluation of Adversarial Attacks Based on DL in Communication Networks. 2020 7th International Conference on Dependable Systems and Their Applications (DSA). :251–252.
Deep Neural Networks (DNNs) have strong capabilities for memorization, feature identification, and automatic analysis, and can solve various complex problems. However, DNN classifiers are fragile: adding small, unnoticeable perturbations to the original examples can lead to errors in classifier identification. In the field of communications, adversarial examples can greatly reduce the accuracy of signal identification, posing serious information security risks. Because adversarial examples threaten the security of DNN models, studying their generation mechanisms and testing their attack effects are critical to ensuring the information security of communication networks. This paper studies the generation of adversarial examples and their influence on the accuracy of DNN-based communication signal identification. It also examines adversarial examples under white-box and black-box models, and explores how factors such as the perturbation level and the number of iterative steps affect the attack. The insights of this study should help secure information networks and guide the design of robust DNN-based communication networks.
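The kind of evaluation described above can be approximated by sweeping the perturbation level and the number of iterative steps of a basic iterative FGSM attack and recording classification accuracy under attack. The sketch below is a minimal illustration under that assumption; the model, data loader, and parameter grid are placeholders, not the paper's experimental setup.

```python
# Minimal sketch: measure signal-classification accuracy under a Basic Iterative
# Method (iterative FGSM) attack for varying perturbation levels and step counts.
import torch
import torch.nn.functional as F

def bim_attack(model, x, y, eps, steps):
    """Basic Iterative Method: repeated FGSM steps clipped to an eps L-inf ball."""
    alpha = eps / steps
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.clamp(x_adv, x - eps, x + eps).detach()
    return x_adv

def accuracy_under_attack(model, loader, eps, steps, device="cpu"):
    """Fraction of test signals still classified correctly after the attack."""
    correct, total = 0, 0
    model.eval()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = bim_attack(model, x, y, eps, steps)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

# Example sweep (values are illustrative placeholders):
# for eps in (0.001, 0.005, 0.01):
#     for steps in (1, 5, 10):
#         print(eps, steps, accuracy_under_attack(model, test_loader, eps, steps))
```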