Title | Evaluating and Improving Adversarial Attacks on DNN-Based Modulation Recognition |
Publication Type | Conference Paper |
Year of Publication | 2020 |
Authors | Zhao, Haojun; Lin, Yun; Gao, Song; Yu, Shui
Conference Name | GLOBECOM 2020 - 2020 IEEE Global Communications Conference |
Date Published | December
Keywords | adversarial attacks, convergence, Deep Learning, Iterative methods, Measurement, Metrics, modulation, modulation recognition, Perturbation methods, Predictive models, predictive security metrics, Task Analysis, wireless security
Abstract | The discovery of adversarial examples poses a serious risk to deep neural networks (DNNs). By adding a subtle perturbation that is imperceptible to the human eye, an adversary can easily fool a well-behaved DNN model into completely changing the prediction categories of the input samples. However, research on adversarial attacks in the field of modulation recognition has mainly focused on increasing the prediction error of the classifier, while ignoring the importance of reducing the perceptibility of the attack. Aiming at the task of DNN-based modulation recognition, this study designs the Fitting Difference as a metric to measure the perturbed waveforms and proposes a new method, the Nesterov Adam Iterative Method, to generate adversarial examples. We show that the proposed algorithm not only mounts excellent white-box attacks but can also attack a black-box model. Moreover, our method reduces the perceptibility of the attack waveform to a certain degree, thereby lowering the risk of an attack being detected.
DOI | 10.1109/GLOBECOM42002.2020.9322088 |
Citation Key | zhao_evaluating_2020 |
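The abstract above names the Nesterov Adam Iterative Method only at a high level. The following is a minimal PyTorch sketch of what an iterative adversarial attack combining a Nesterov look-ahead with Adam-style moment estimates might look like; the function name, hyperparameters, loss choice, and L-infinity projection are illustrative assumptions, not the authors' published algorithm.

```python
import torch
import torch.nn.functional as F

def nesterov_adam_attack(model, x, y, eps=0.05, alpha=0.005, steps=10,
                         beta1=0.9, beta2=0.999, delta=1e-8):
    """Sketch of an iterative attack mixing a Nesterov look-ahead
    with Adam-style moment estimates (assumed form, not the paper's
    exact NAI update rule)."""
    x_adv = x.clone().detach()
    m = torch.zeros_like(x)   # first-moment (momentum) estimate
    v = torch.zeros_like(x)   # second-moment estimate

    for t in range(1, steps + 1):
        # Nesterov look-ahead: take the gradient at the anticipated point.
        x_nes = (x_adv + alpha * beta1 * m).detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_nes), y)
        grad, = torch.autograd.grad(loss, x_nes)

        # Adam-style moment updates with bias correction.
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad ** 2
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)

        # Ascend the loss, then project back into the eps L-inf ball
        # around the clean waveform.
        x_adv = x_adv + alpha * m_hat / (v_hat.sqrt() + delta)
        x_adv = (x + torch.clamp(x_adv - x, -eps, eps)).detach()

    return x_adv
```

In related attack literature, evaluating the gradient at a Nesterov look-ahead point is associated with better black-box transferability, which is consistent with the abstract's claim that the method also succeeds against a black-box model.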