Feature Denoising for Improving Adversarial Robustness
Title | Feature Denoising for Improving Adversarial Robustness
Publication Type | Conference Paper
Year of Publication | 2019
Authors | Xie, Cihang; Wu, Yuxin; Maaten, Laurens van der; Yuille, Alan L.; He, Kaiming
Conference Name | 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
ISBN Number | 978-1-7281-3293-8
Keywords | 10-iteration PGD white-box attacks, 2000-iteration PGD white-box attacks, adversarial attacks, adversarial perturbations, Adversarial robustness, Adversarial training, black-box attack settings, categorization, composability, compositionality, convolutional networks, Deep Learning, feature denoising, feature extraction, image classification, image classification systems, image denoising, Iterative methods, learning (artificial intelligence), Metrics, network architectures, nonlocal means, pattern classification, pubcrawl, Recognition: Detection, resilience, Resiliency, retrieval, security of data, White Box Security
Abstract | Adversarial attacks on image classification systems present challenges to convolutional networks and opportunities for understanding them. This study suggests that adversarial perturbations on images lead to noise in the features constructed by these networks. Motivated by this observation, we develop new network architectures that increase adversarial robustness by performing feature denoising. Specifically, our networks contain blocks that denoise the features using non-local means or other filters; the entire networks are trained end-to-end. When combined with adversarial training, our feature denoising networks substantially improve the state-of-the-art in adversarial robustness in both white-box and black-box attack settings. On ImageNet, under 10-iteration PGD white-box attacks where prior art has 27.9% accuracy, our method achieves 55.7%; even under extreme 2000-iteration PGD white-box attacks, our method secures 42.6% accuracy. Our method was ranked first in the Competition on Adversarial Attacks and Defenses (CAAD) 2018 -- it achieved 50.6% classification accuracy on a secret, ImageNet-like test dataset against 48 unknown attackers, surpassing the runner-up approach by 10%. Code is available at https://github.com/facebookresearch/ImageNet-Adversarial-Training.
URL | https://ieeexplore.ieee.org/document/8954372/
DOI | 10.1109/CVPR.2019.00059
Citation Key | xie_feature_2019
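
The abstract's central idea -- a denoising block built from non-local means, a 1x1 convolution, and a residual connection -- can be illustrated with a minimal PyTorch sketch. The class name `NonLocalDenoisingBlock` and the bare dot-product affinity (no learned embeddings) are illustrative assumptions, not the authors' exact implementation; see the linked repository for that.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalDenoisingBlock(nn.Module):
    """Sketch of a feature-denoising block in the spirit of Xie et al.:
    non-local means over the feature map, followed by a 1x1 convolution
    and a residual connection back onto the input features."""

    def __init__(self, channels: int):
        super().__init__()
        # The 1x1 conv lets the network rescale the denoised signal
        # before it is added back onto the identity path.
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        flat = x.flatten(2)                               # (N, C, H*W)
        # Affinity between every pair of spatial positions; softmax
        # normalization corresponds to the Gaussian non-local variant.
        affinity = torch.bmm(flat.transpose(1, 2), flat)  # (N, HW, HW)
        weights = F.softmax(affinity, dim=-1)
        # Each output position is a weighted mean over all positions.
        denoised = torch.bmm(flat, weights.transpose(1, 2)).reshape(n, c, h, w)
        return x + self.conv(denoised)
```

Per the abstract, such blocks are inserted into a convolutional backbone and the entire network, denoising blocks included, is trained end-to-end together with adversarial training.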
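
The robustness figures quoted above are measured against PGD (projected gradient descent) white-box attacks run for 10 to 2000 iterations. For reference, here is a minimal sketch of an L-infinity PGD attacker in the same assumed PyTorch setting; the function name `pgd_attack` and the `eps`/`alpha`/`steps` defaults are illustrative placeholders, not the paper's evaluation settings.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, eps=16 / 255, alpha=1 / 255, steps=10):
    """L-infinity PGD: repeatedly ascend the loss along the gradient
    sign, projecting back into the eps-ball around the clean images.
    Assumes pixel values in [0, 1]."""
    # Random start inside the eps-ball, as is standard for PGD.
    adv = images + torch.empty_like(images).uniform_(-eps, eps)
    adv = adv.clamp(0, 1).detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        # One signed gradient step, then projection onto the eps-ball.
        adv = adv.detach() + alpha * grad.sign()
        adv = images + (adv - images).clamp(-eps, eps)
        adv = adv.clamp(0, 1).detach()
    return adv
```

A 10-iteration attacker corresponds to `steps=10`; the abstract's strongest reported setting runs the same loop for 2000 iterations.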