A Two-Layer Moving Target Defense for Image Classification in Adversarial Environment

Title: A Two-Layer Moving Target Defense for Image Classification in Adversarial Environment
Publication Type: Conference Paper
Year of Publication: 2020
Authors: Peng, Ye; Fu, Guobin; Luo, Yingguang; Yu, Qi; Li, Bin; Hu, Jia
Conference Name: 2020 IEEE 6th International Conference on Computer and Communications (ICCC)
Date Published: December 2020
Keywords: adversarial sample, Deep Learning, defensive technique, image classification, image processing, machine learning, machine learning algorithms, Perturbation methods, Predictive models, Robustness, Scalability, Training
Abstract: Deep learning plays an increasingly important role in various fields due to its superior performance, and it achieves state-of-the-art recognition performance in image classification. However, its vulnerability in adversarial environments cannot be ignored: a model's prediction can be altered by small perturbations that an adversary adds to the input samples. In this paper, we propose a two-layer dynamic defense method based on a pool of defensive techniques and a pool of retrained branch models. First, we randomly select defense methods from the defense pool to preprocess the input. Preprocessing by different defense methods changes the perturbation ability of adversarial samples, producing different classification results. Second, we conduct adversarial training based on the original model and dynamically generate multiple branch models. The classification results of these branch models for the same adversarial sample are inconsistent. We detect adversarial samples by exploiting the inconsistencies in the outputs of the two layers. Experimental results show that the proposed two-layer dynamic defense method achieves a good defense effect.
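The detection logic described in the abstract can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the defense pool is modeled as a list of preprocessing functions, the branch models as arbitrary classifiers, and the function names (`two_layer_detect`, `defense_pool`, `branch_models`) are assumptions for illustration. An input is flagged as adversarial when the branch models disagree on the label of the randomly preprocessed input.

```python
import random

def two_layer_detect(x, defense_pool, branch_models, rng=None):
    """Hypothetical sketch of the two-layer moving target defense.

    Layer 1: randomly pick a preprocessing defense from the pool.
    Layer 2: classify the preprocessed input with every retrained
    branch model and flag the input as adversarial if the models
    disagree on the predicted label.
    """
    rng = rng or random.Random()
    # Layer 1: random selection from the defense pool (the "moving target").
    defense = rng.choice(defense_pool)
    x_def = defense(x)
    # Layer 2: collect predictions from all branch models.
    preds = [model(x_def) for model in branch_models]
    # Inconsistent predictions signal an adversarial sample.
    is_adversarial = len(set(preds)) > 1
    return is_adversarial, preds

# Toy usage with scalar "images" and threshold "classifiers":
model_a = lambda x: int(x > 0.5)    # stand-in branch model 1
model_b = lambda x: int(x > 0.65)   # stand-in branch model 2
pool = [lambda x: round(x, 1), lambda x: round(x, 2)]  # stand-in defenses

flag, _ = two_layer_detect(0.55, pool, [model_a, model_b])  # models disagree
```

In practice the branch models would be adversarially retrained copies of the original network and the defenses would be image transforms (e.g. JPEG compression, bit-depth reduction); the random per-query choice of defense is what makes the target "moving" from the attacker's perspective.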
DOI: 10.1109/ICCC51575.2020.9345217
Citation Key: peng_two-layer_2020