BlurNet: Defense by Filtering the Feature Maps
Title | BlurNet: Defense by Filtering the Feature Maps |
Publication Type | Conference Paper |
Year of Publication | 2020 |
Authors | Raju, R. S., Lipasti, M. |
Conference Name | 2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W) |
Date Published | July 2020 |
Publisher | IEEE |
ISBN Number | 978-1-7281-7263-7 |
Keywords | Adaptation models, adaptive attack evaluation, adaptive filtering, adversarial defense, adversarial images, Adversarial Machine Learning, Adversarial robustness, attack algorithms, black stickers, blackbox transfer attack, BlurNet, depthwise convolution layer, frequency analysis, gradient information, high frequency noise, image recognition, image restoration, input image, Kernel, layer feature maps, learning (artificial intelligence), low-pass filters, lowpass filtering behavior, malicious adversary, malicious examples, Metrics, neural nets, Neural networks, Perturbation methods, pubcrawl, resilience, Resiliency, robust physical perturbations, Robustness, RP, Scalability, security of data, standard blur kernels, standard-architecture traffic sign classifiers, Standards, stop signs, substitute model, targeted misclassification rates, traffic engineering computing, victim model, white stickers, white-box attacks |
Abstract | The field of adversarial machine learning has recently garnered attention by showing that state-of-the-art deep neural networks are vulnerable to adversarial examples: small perturbations added to the input image. A malicious adversary generates adversarial examples either by obtaining access to model parameters, such as gradient information, to alter the input, or by attacking a substitute model and transferring those malicious examples to the victim model. One such attack algorithm, Robust Physical Perturbations (RP2), generates adversarial images of stop signs with black and white stickers to achieve high targeted misclassification rates against standard-architecture traffic sign classifiers. In this paper, we propose BlurNet, a defense against the RP2 attack. First, we motivate the defense with a frequency analysis of the network's first-layer feature maps on the LISA dataset, which shows that the RP2 algorithm introduces high-frequency noise into the input image. To remove this noise, we insert a depthwise convolution layer of standard blur kernels after the first layer. We perform a black-box transfer attack to show that low-pass filtering the feature maps is more beneficial than filtering the input. We then present several regularization schemes that incorporate this low-pass filtering behavior into the network's training regime and evaluate them under white-box attacks. We conclude with an adaptive attack evaluation showing that the attack success rate drops from 90% to 20% with total variation regularization, one of the proposed defenses. |
URL | https://ieeexplore.ieee.org/document/9151833 |
DOI | 10.1109/DSN-W50199.2020.00016 |
Citation Key | raju_blurnet_2020 |
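The abstract describes two mechanisms: a fixed depthwise blur applied to first-layer feature maps, and a total variation penalty that encourages the network to learn that smoothing during training. The following is a minimal PyTorch sketch of both ideas, not the authors' code; the 3x3 Gaussian kernel, the 64-channel first layer, the 32x32 input size, and the penalty weight are all assumptions made for illustration.

```python
# Minimal sketch of the two defense ideas from the abstract (not the authors'
# implementation). Assumed: a 3x3 Gaussian blur kernel, a 64-channel first
# conv layer, and 32x32 traffic-sign crops; the paper's details may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthwiseBlur(nn.Module):
    """Fixed (non-learned) depthwise low-pass filter over feature maps."""
    def __init__(self, channels: int):
        super().__init__()
        # 3x3 Gaussian kernel, replicated once per channel for depthwise use.
        k = torch.tensor([[1., 2., 1.],
                          [2., 4., 2.],
                          [1., 2., 1.]]) / 16.0
        self.register_buffer("weight", k.expand(channels, 1, 3, 3).clone())
        self.channels = channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # groups=channels makes this a depthwise convolution: each feature
        # map is blurred independently, suppressing high-frequency noise.
        return F.conv2d(x, self.weight, padding=1, groups=self.channels)

def total_variation(fmap: torch.Tensor) -> torch.Tensor:
    """Anisotropic total variation of feature maps shaped (B, C, H, W)."""
    dh = (fmap[:, :, 1:, :] - fmap[:, :, :-1, :]).abs().mean()
    dw = (fmap[:, :, :, 1:] - fmap[:, :, :, :-1]).abs().mean()
    return dh + dw

# Usage sketch: blur the first-layer feature maps, or alternatively add a
# TV penalty on those maps to the training loss to learn the smoothing.
conv1 = nn.Conv2d(3, 64, kernel_size=3, padding=1)
blur = DepthwiseBlur(channels=64)
x = torch.randn(8, 3, 32, 32)            # batch of traffic-sign crops
feat = blur(conv1(x))                     # low-pass filtered feature maps
tv_loss = 0.01 * total_variation(feat)    # 0.01 is an assumed hyperparameter
```

The depthwise form matters here: a standard convolution would mix channels, whereas `groups=channels` applies the same spatial blur to each feature map in isolation, which matches the abstract's "depthwise convolution layer of standard blur kernels."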