Biblio

Yu, Jia ao, Peng, Lei.  2020.  Black-box Attacks on DNN Classifier Based on Fuzzy Adversarial Examples. 2020 IEEE 5th International Conference on Signal and Image Processing (ICSIP). :965–969.
The security of deep learning becomes increasingly important as more and more related applications emerge. Adversarial attacks are a known method for making the performance of a deep neural network (DNN) decline rapidly. However, an adversarial attack typically needs gradient knowledge of the target network to craft specific adversarial examples; this is the white-box setting, which rarely holds in reality. In this paper, we implement a black-box attack on a DNN classifier via a functionally equivalent network, without knowing the internal structure or parameters of the target network. We also increase the entropy of the noise via a deep convolutional generative adversarial network (DCGAN) to make it appear fuzzier, so that it is harder to probe and eliminate through adversarial training. Experiments show that this method can quickly produce a large number of adversarial examples in batches, and that the target network cannot simply improve its accuracy via adversarial training.
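The abstract does not give implementation details. As a rough illustration of the substitute-model idea it describes, the sketch below trains a small "functionally equivalent" network on labels obtained only by querying a black-box target, then crafts transferable adversarial examples on that substitute. The network architecture, the `target_query` callable, all hyper-parameters, and the use of FGSM as the crafting step are illustrative assumptions; the paper's own crafting procedure and its DCGAN-based noise shaping are not reproduced here.

```python
# Minimal sketch (not the authors' code): black-box attack via a substitute network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubstituteNet(nn.Module):
    """Small CNN trained to mimic the black-box target classifier (assumed 28x28 grayscale input)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(64 * 7 * 7, num_classes)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

def train_substitute(substitute, target_query, images, epochs=5, lr=1e-3):
    """Fit the substitute using only the target's predicted labels (no gradients or parameters)."""
    opt = torch.optim.Adam(substitute.parameters(), lr=lr)
    with torch.no_grad():
        labels = target_query(images).argmax(dim=1)  # black-box: only outputs are observed
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(substitute(images), labels)
        loss.backward()
        opt.step()
    return substitute

def craft_adversarial(substitute, images, labels, eps=0.1):
    """White-box FGSM on the substitute; the examples are then transferred to the target."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(substitute(images), labels)
    loss.backward()
    return (images + eps * images.grad.sign()).clamp(0, 1).detach()
```

Because the adversarial examples are computed entirely on the substitute, the attacker never needs the target's gradients; the attack relies on the transferability of perturbations between functionally similar classifiers.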