Defending Against Adversarial Attacks in Deep Learning with Robust Auxiliary Classifiers Utilizing Bit Plane Slicing

Title: Defending Against Adversarial Attacks in Deep Learning with Robust Auxiliary Classifiers Utilizing Bit Plane Slicing
Publication Type: Conference Paper
Year of Publication: 2020
Authors: Liu, Yuan; Zhou, Pingqiang
Conference Name: 2020 Asian Hardware Oriented Security and Trust Symposium (AsianHOST)
Date Published: Dec. 2020
Publisher: IEEE
ISBN Number: 978-1-7281-8952-9
Keywords: adversarial defense, bit plane slicing, composability, convolution, Deep Learning, Hardware, Metrics, Neural networks, object oriented security, Perturbation methods, pubcrawl, Resiliency, security, security of neural networks
Abstract: Deep Neural Networks (DNNs) have been widely used in a variety of fields with great success. However, recent research indicates that DNNs are susceptible to adversarial attacks, which can easily fool well-trained DNNs without being detected by human eyes. In this paper, we propose to combine the target DNN model with robust bit plane classifiers to defend against adversarial attacks. This approach comes from our finding that successful attacks generate imperceptible perturbations, which mainly affect the low-order bits of pixel values in clean images. Hence, using bit planes instead of traditional RGB channels for convolution can effectively reduce the channel modification rate. We conduct experiments on the CIFAR-10 and GTSRB datasets. The results show that our defense method can effectively increase the model accuracy on average from 8.72% to 85.99% under attacks on CIFAR-10 without sacrificing accuracy on clean images.
URL: https://ieeexplore.ieee.org/document/9358268
DOI: 10.1109/AsianHOST51057.2020.9358268
Citation Key: liu_defending_2020
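
The abstract above describes feeding bit planes, rather than RGB channels, into the auxiliary classifiers. The following is a minimal sketch of bit plane slicing on an 8-bit image, for illustration only; the function name, plane ordering, and output layout are assumptions and are not taken from the paper.

```python
import numpy as np

def bit_plane_slice(image: np.ndarray, num_planes: int = 8) -> np.ndarray:
    """Split an 8-bit image of shape (H, W, C) into its binary bit planes.

    Returns an array of shape (H, W, C * num_planes) where each output
    channel holds one bit plane (bit 0 = least significant) of one input
    channel. Imperceptible perturbations mostly flip the low-order planes,
    leaving the high-order planes largely unchanged.
    """
    assert image.dtype == np.uint8, "expects 8-bit pixel values"
    # Extract each bit plane with a shift-and-mask, low-order bits first.
    planes = [((image >> b) & 1).astype(np.float32) for b in range(num_planes)]
    return np.concatenate(planes, axis=-1)

# Example: a CIFAR-10 sized RGB image becomes a 24-channel bit-plane tensor.
rgb = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)
sliced = bit_plane_slice(rgb)
print(sliced.shape)  # (32, 32, 24)
```

In this sketch, the resulting bit-plane tensor would serve as the input to a convolutional classifier in place of the raw RGB image; how the paper's robust auxiliary classifiers consume or weight individual planes is not specified here.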