Black-box Attacks on DNN Classifier Based on Fuzzy Adversarial Examples

Title: Black-box Attacks on DNN Classifier Based on Fuzzy Adversarial Examples
Publication Type: Conference Paper
Year of Publication: 2020
Authors: Yu, Jia ao, Peng, Lei
Conference Name: 2020 IEEE 5th International Conference on Signal and Image Processing (ICSIP)
Keywords: Black Box Attacks, black-box attack, composability, Deep Learning, Entropy, functionally equivalent network, fuzzy adversarial examples, generative adversarial networks, image processing, Knowledge engineering, Metrics, pubcrawl, resilience, Resiliency, security, Training, white box cryptography
Abstract: The security of deep learning becomes increasingly important as more and more related applications appear. Adversarial attacks are a known way to make the performance of a deep neural network (DNN) decline rapidly. However, an adversarial attack typically needs gradient knowledge of the target network to craft specific adversarial examples; this is a white-box attack and rarely holds in reality. In this paper, we implement a black-box attack on a DNN classifier via a functionally equivalent network, without knowing the internal structure and parameters of the target network. We also increase the entropy of the noise via a deep convolutional generative adversarial network (DCGAN) to make it appear fuzzier, so it is not easily probed and eliminated by adversarial training. Experiments show that this method can quickly produce a large number of adversarial examples in batches and that the target network cannot simply improve its accuracy via adversarial training.
DOI: 10.1109/ICSIP49896.2020.9339329
Citation Key: yu_black-box_2020
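
The abstract above describes a transfer-style attack: a functionally equivalent (substitute) network is trained from the target's outputs and then used to craft adversarial examples that fool the black-box target. The sketch below illustrates that general idea only; the substitute architecture, the query interface (target_query), the FGSM perturbation step, and the hyperparameters are assumptions for illustration, and the paper's DCGAN-based step for increasing the entropy of the perturbation is not shown.

```python
# A minimal PyTorch sketch of a substitute-model ("functionally equivalent network")
# transfer attack in the black-box setting described in the abstract. This is NOT the
# authors' implementation: the query interface, hyperparameters, and the FGSM step are
# illustrative assumptions, and the DCGAN used in the paper to make the perturbation
# fuzzier is omitted.
import torch
import torch.nn.functional as F

def train_substitute(substitute, target_query, loader, epochs=5, lr=1e-3):
    """Fit a substitute network on labels obtained by querying the black-box target."""
    opt = torch.optim.Adam(substitute.parameters(), lr=lr)
    for _ in range(epochs):
        for x, _ in loader:                        # ground-truth labels are not needed
            with torch.no_grad():
                y = target_query(x).argmax(dim=1)  # hard labels from the target's outputs
            opt.zero_grad()
            F.cross_entropy(substitute(x), y).backward()
            opt.step()
    return substitute

def fgsm_on_substitute(substitute, x, y, eps=0.03):
    """Craft adversarial examples from the substitute's gradients (FGSM); they are then
    transferred to the target without access to its structure or parameters."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(substitute(x_adv), y)
    loss.backward()
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()
```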