POSTER: Zero-Day Evasion Attack Analysis on Race Between Attack and Defense
Title | POSTER: Zero-Day Evasion Attack Analysis on Race Between Attack and Defense
Publication Type | Conference Paper |
Year of Publication | 2018 |
Authors | Kwon, Hyun; Yoon, Hyunsoo; Choi, Daeseon
Conference Name | Proceedings of the 2018 on Asia Conference on Computer and Communications Security |
Date Published | May 2018 |
Publisher | ACM |
Publisher Location | New York, NY, USA
ISBN Number | 978-1-4503-5576-6 |
Keywords | adversarial example, Adversarial training, composability, deep neural network (dnn), defense, Metrics, pubcrawl, resilience, Resiliency, Zero day attacks, zero-day adversarial examples |
Abstract | Deep neural networks (DNNs) exhibit excellent performance in machine learning tasks such as image recognition, pattern recognition, speech recognition, and intrusion detection. However, adversarial examples, inputs that have been intentionally perturbed with noise, can cause a DNN to misclassify. Because adversarial examples are a serious threat to DNNs, both adversarial attacks and defenses against them have been studied continuously. Zero-day adversarial examples are created from new test data and are unknown to the classifier; hence, they pose an even greater threat to DNNs. To the best of our knowledge, no study in the literature has experimentally analyzed zero-day adversarial examples with a focus on attack and defense methods across several scenarios. Therefore, in this study, zero-day adversarial examples are analyzed in practice, with an emphasis on attack and defense methods, through experiments on scenarios composed of a fixed target model and an adaptive target model. The Carlini method was used as a state-of-the-art attack, and adversarial training was used as a typical defense. Using the MNIST dataset, we analyzed the success rates of zero-day adversarial examples, their average distortions, and the recognition rates on original samples across several fixed and adaptive target model scenarios. The experimental results demonstrate that updating the parameters of the target model in real time leads to resistance to adversarial examples in both the fixed and adaptive target models.
URL | https://dl.acm.org/doi/10.1145/3196494.3201583 |
DOI | 10.1145/3196494.3201583 |
Citation Key | kwon_poster:_2018 |
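The following is a minimal, illustrative sketch (in PyTorch, not the authors' code) of the kind of experiment the abstract describes: train an MNIST target model with adversarial training, then measure the success rate of adversarial examples crafted from previously unseen test data against the defended model. The paper uses the Carlini & Wagner attack and also evaluates an adaptive target model whose parameters change in real time; for brevity, this sketch substitutes the fast gradient sign method (FGSM) and covers only the fixed, adversarially trained model. All names (SmallCNN, fgsm, train_adversarially, attack_success_rate) and hyperparameters are hypothetical choices, not taken from the paper.

```python
# Illustrative sketch only: adversarial training on MNIST, then evaluating
# adversarial examples crafted from held-out ("zero-day") test data.
# The paper uses the Carlini & Wagner attack; FGSM is substituted here for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

class SmallCNN(nn.Module):
    """A small MNIST classifier standing in for the paper's target model (hypothetical architecture)."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 32, 3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, 3, padding=1)
        self.fc1 = nn.Linear(64 * 7 * 7, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.flatten(1)
        return self.fc2(F.relu(self.fc1(x)))

def fgsm(model, x, y, eps=0.25):
    """Craft adversarial examples with the fast gradient sign method (stand-in for the Carlini attack)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def train_adversarially(model, loader, epochs=1, device="cpu"):
    """Adversarial training: fit the target model on clean plus adversarial batches."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            x_adv = fgsm(model, x, y)
            opt.zero_grad()
            loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
            loss.backward()
            opt.step()

def attack_success_rate(model, loader, device="cpu"):
    """Fraction of unseen test samples whose adversarial version fools the defended model."""
    model.eval()
    fooled, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = fgsm(model, x, y)
        with torch.no_grad():
            pred = model(x_adv).argmax(1)
        fooled += (pred != y).sum().item()
        total += y.numel()
    return fooled / total

if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    tfm = transforms.ToTensor()
    train_set = datasets.MNIST("data", train=True, download=True, transform=tfm)
    test_set = datasets.MNIST("data", train=False, download=True, transform=tfm)
    model = SmallCNN().to(device)
    train_adversarially(model, DataLoader(train_set, batch_size=128, shuffle=True), device=device)
    rate = attack_success_rate(model, DataLoader(test_set, batch_size=256), device=device)
    print(f"adversarial success rate on unseen test data against the defended model: {rate:.2%}")
```

The adaptive target model scenario studied in the paper could be approximated by re-running train_adversarially periodically on newly observed adversarial examples, so that the attacker always faces updated parameters; that loop is omitted here.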