Feature Denoising for Improving Adversarial Robustness

Title: Feature Denoising for Improving Adversarial Robustness
Publication Type: Conference Paper
Year of Publication: 2019
Authors: Xie, Cihang; Wu, Yuxin; Maaten, Laurens van der; Yuille, Alan L.; He, Kaiming
Conference Name: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
ISBN Number: 978-1-7281-3293-8
Keywords: 10-iteration PGD white-box attacks, 2000-iteration PGD white-box attacks, adversarial attacks, adversarial perturbations, Adversarial robustness, Adversarial training, black-box attack settings, categorization, composability, compositionality, convolutional networks, Deep Learning, feature denoising, feature extraction, image classification, image classification systems, image denoising, Iterative methods, learning (artificial intelligence), Metrics, network architectures, nonlocal means, pattern classification, pubcrawl, Recognition: Detection, resilience, Resiliency, retrieval, security of data, White Box Security
Abstract

Adversarial attacks on image classification systems present challenges to convolutional networks and opportunities for understanding them. This study suggests that adversarial perturbations on images lead to noise in the features constructed by these networks. Motivated by this observation, we develop new network architectures that increase adversarial robustness by performing feature denoising. Specifically, our networks contain blocks that denoise the features using non-local means or other filters; the entire networks are trained end-to-end. When combined with adversarial training, our feature denoising networks substantially improve the state-of-the-art in adversarial robustness in both white-box and black-box attack settings. On ImageNet, under 10-iteration PGD white-box attacks where prior art has 27.9% accuracy, our method achieves 55.7%; even under extreme 2000-iteration PGD white-box attacks, our method secures 42.6% accuracy. Our method was ranked first in the Competition on Adversarial Attacks and Defenses (CAAD) 2018 -- it achieved 50.6% classification accuracy on a secret, ImageNet-like test dataset against 48 unknown attackers, surpassing the runner-up approach by 10%. Code is available at https://github.com/facebookresearch/ImageNet-Adversarial-Training.
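The abstract's central idea is a denoising block that applies a non-local means filter to intermediate feature maps, followed by a 1x1 convolution and a residual connection, with the whole network trained end-to-end under adversarial training. As a rough illustration only (the class and variable names below are ours, not the authors'; the reference implementation is in the repository linked above), a dot-product non-local means block might look like this minimal PyTorch sketch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalDenoiseBlock(nn.Module):
    """Sketch of a feature-denoising block: a non-local weighted mean
    over all spatial positions, a 1x1 convolution, and a residual
    connection. Illustrative only; names are not from the paper."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 conv applied to the denoised features before the residual add
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        flat = x.view(n, c, h * w)                         # (N, C, HW)
        # Pairwise dot-product affinities between all spatial positions
        affinity = torch.bmm(flat.transpose(1, 2), flat)   # (N, HW, HW)
        weights = F.softmax(affinity, dim=-1)              # each row sums to 1
        # Each output position is a weighted mean over all input positions
        denoised = torch.bmm(flat, weights.transpose(1, 2)).view(n, c, h, w)
        return x + self.conv(denoised)                     # residual connection

# Usage: the block preserves the feature-map shape, so it can be
# interleaved with a backbone's stages.
block = NonLocalDenoiseBlock(channels=256)
features = torch.randn(2, 256, 14, 14)
out = block(features)  # shape: (2, 256, 14, 14)
```

Per the abstract, a few such blocks are inserted into the network and the whole model is trained end-to-end with adversarial training (PGD-based in the evaluation); this sketch omits the embedding functions and alternative filters (e.g., Gaussian-weighted variants) that the paper also considers.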

URL: https://ieeexplore.ieee.org/document/8954372/
DOI: 10.1109/CVPR.2019.00059
Citation Key: xie_feature_2019