Efficient Defenses Against Adversarial Attacks

Title: Efficient Defenses Against Adversarial Attacks
Publication Type: Conference Paper
Year of Publication: 2017
Authors: Zantedeschi, Valentina; Nicolae, Maria-Irina; Rawat, Ambrish
Conference Name: Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security
Publisher: ACM
Conference Location: New York, NY, USA
ISBN Number: 978-1-4503-5202-4
Keywords: adversarial learning, Collaboration, cyber physical systems, Deep Neural Network, defenses, Metrics, model security, neural networks security, policy, policy-based governance, Policy-Governed Secure Collaboration, pubcrawl, resilience, Resiliency
Abstract: Following the recent adoption of deep neural networks (DNN) across a wide range of applications, adversarial attacks against these models have proven to be an indisputable threat. Adversarial samples are crafted with the deliberate intention of undermining a system. In the case of DNNs, an incomplete understanding of their inner workings has hindered the development of efficient defenses. In this paper, we propose a new defense method based on practical observations which is easy to integrate into models and performs better than state-of-the-art defenses. Our proposed solution is meant to reinforce the structure of a DNN, making its predictions more stable and less likely to be fooled by adversarial samples. We conduct an extensive experimental study demonstrating the efficiency of our method against multiple attacks, comparing it to numerous defenses, in both white-box and black-box setups. Additionally, the implementation of our method brings almost no overhead to the training procedure, while maintaining the prediction performance of the original model on clean samples.
URL: http://doi.acm.org/10.1145/3128572.3140449
DOI: 10.1145/3128572.3140449
Citation Key: zantedeschi_efficient_2017