Title | Efficient Defenses Against Adversarial Attacks |
Publication Type | Conference Paper |
Year of Publication | 2017 |
Authors | Zantedeschi, Valentina, Nicolae, Maria-Irina, Rawat, Ambrish |
Conference Name | Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security |
Publisher | ACM |
Conference Location | New York, NY, USA |
ISBN Number | 978-1-4503-5202-4 |
Keywords | adversarial learning, Collaboration, cyber physical systems, Deep Neural Network, defenses, Metrics, model security, neural networks security, policy, policy-based governance, Policy-Governed Secure Collaboration, pubcrawl, resilience, Resiliency |
Abstract | Following the recent adoption of deep neural networks (DNN) across a wide range of applications, adversarial attacks against these models have proven to be an indisputable threat. Adversarial samples are crafted with a deliberate intention of undermining a system. In the case of DNNs, the limited understanding of their inner workings has prevented the development of efficient defenses. In this paper, we propose a new defense method based on practical observations which is easy to integrate into models and performs better than state-of-the-art defenses. Our proposed solution is meant to reinforce the structure of a DNN, making its predictions more stable and less likely to be fooled by adversarial samples. We conduct an extensive experimental study demonstrating the efficiency of our method against multiple attacks, comparing it to numerous defenses, both in white-box and black-box setups. Additionally, the implementation of our method brings almost no overhead to the training procedure, while maintaining the prediction performance of the original model on clean samples. |
URL | http://doi.acm.org/10.1145/3128572.3140449 |
DOI | 10.1145/3128572.3140449 |
Citation Key | zantedeschi_efficient_2017 |