POSTER: Zero-Day Evasion Attack Analysis on Race Between Attack and Defense

Title: POSTER: Zero-Day Evasion Attack Analysis on Race Between Attack and Defense
Publication Type: Conference Paper
Year of Publication: 2018
Authors: Kwon, Hyun; Yoon, Hyunsoo; Choi, Daeseon
Conference Name: Proceedings of the 2018 on Asia Conference on Computer and Communications Security
Date Published: May 2018
Publisher: ACM
Conference Location: New York, NY, USA
ISBN Number: 978-1-4503-5576-6
Keywords: adversarial example, Adversarial training, composability, deep neural network (dnn), defense, Metrics, pubcrawl, resilience, Resiliency, Zero day attacks, zero-day adversarial examples
Abstract

Deep neural networks (DNNs) exhibit excellent performance in machine learning tasks such as image recognition, pattern recognition, speech recognition, and intrusion detection. However, adversarial examples, inputs intentionally corrupted with noise, can cause DNNs to misclassify. Because adversarial examples are serious threats to DNNs, both adversarial attacks and methods of defending against them have been studied continuously. Zero-day adversarial examples are created from new test data and are unknown to the classifier; hence, they pose an even greater threat to DNNs. To the best of our knowledge, no analytical study in the literature has examined zero-day adversarial examples with a focus on attack and defense methods through experiments across several scenarios. Therefore, in this study, zero-day adversarial examples are analyzed in practice, with an emphasis on attack and defense methods, through experiments on scenarios composed of a fixed target model and an adaptive target model. The Carlini method was used as a state-of-the-art attack, while adversarial training was used as a typical defense method. Using the MNIST dataset, we analyzed the success rates of zero-day adversarial examples, their average distortions, and the recognition of original samples across several scenarios of fixed and adaptive target models. Experimental results demonstrate that changing the parameters of the target model in real time leads to resistance to adversarial examples in both the fixed and adaptive target models.
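As a rough illustration of the setup the abstract describes, the following minimal sketch (not the authors' code) pairs a simplified untargeted Carlini-Wagner (C&W) L2 attack with an adversarial-training defense on MNIST. The network architecture, hyperparameters, and the exact loss formulation are illustrative assumptions; the paper's fixed and adaptive target-model scenarios are not reproduced here.

# Illustrative sketch, assuming PyTorch/torchvision; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import datasets, transforms

class SmallCNN(nn.Module):
    """Toy MNIST classifier standing in for the paper's target model."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 32, 3)
        self.conv2 = nn.Conv2d(32, 64, 3)
        self.fc1 = nn.Linear(64 * 12 * 12, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = F.relu(self.conv1(x))                   # 28x28 -> 26x26
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # 26x26 -> 24x24 -> 12x12
        return self.fc2(F.relu(self.fc1(x.flatten(1))))

def cw_l2(model, x, y, c=1.0, steps=100, lr=0.01, kappa=0.0):
    """Simplified untargeted C&W L2 attack: minimize ||x'-x||^2 + c*f(x'),
    optimizing in tanh space so pixel values stay in [0, 1]."""
    w = torch.atanh((2 * x - 1).clamp(-0.999, 0.999)).detach().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        x_adv = 0.5 * (torch.tanh(w) + 1)
        logits = model(x_adv)
        true_logit = logits.gather(1, y.unsqueeze(1)).squeeze(1)
        # best logit among the non-true classes
        masked = logits.scatter(1, y.unsqueeze(1), float("-inf"))
        f = torch.clamp(true_logit - masked.max(1).values, min=-kappa)
        loss = ((x_adv - x).flatten(1).pow(2).sum(1) + c * f).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (0.5 * (torch.tanh(w) + 1)).detach()

def adversarial_training(model, loader, epochs=1, device="cpu"):
    """Typical adversarial-training defense: train on clean plus crafted batches."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            x_adv = cw_l2(model, x, y, steps=30)  # inner attack, shortened for speed
            opt.zero_grad()
            loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
            loss.backward()
            opt.step()

if __name__ == "__main__":
    data = datasets.MNIST("data", train=True, download=True,
                          transform=transforms.ToTensor())
    loader = torch.utils.data.DataLoader(data, batch_size=128, shuffle=True)
    adversarial_training(SmallCNN(), loader)

The C&W objective minimized here is ||x' - x||^2 + c * f(x'), where f rewards pushing the true-class logit below the best competing logit; the tanh reparameterization is the standard device from Carlini and Wagner's formulation for keeping the perturbed image in the valid pixel range.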

URL: https://dl.acm.org/doi/10.1145/3196494.3201583
DOI: 10.1145/3196494.3201583
Citation Key: kwon_poster:_2018