An ADMM-Based Universal Framework for Adversarial Attacks on Deep Neural Networks
Title | An ADMM-Based Universal Framework for Adversarial Attacks on Deep Neural Networks |
Publication Type | Conference Paper |
Year of Publication | 2018 |
Authors | Zhao, Pu; Liu, Sijia; Wang, Yanzhi; Lin, Xue |
Conference Name | Proceedings of the 26th ACM International Conference on Multimedia |
Date Published | October 2018 |
Publisher | ACM |
Conference Location | New York, NY, USA |
ISBN Number | 978-1-4503-5665-7 |
Keywords | ADMM (alternating direction method of multipliers), adversarial attacks, deep neural networks |
Abstract | Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks. That is, adversarial examples, obtained by adding delicately crafted distortions onto original legal inputs, can mislead a DNN to classify them as any target label. In a successful adversarial attack, the targeted misclassification should be achieved with the minimal distortion added. In the literature, the added distortions are usually measured by the $L_0$, $L_1$, $L_2$, and $L_\infty$ norms, namely, $L_0$, $L_1$, $L_2$, and $L_\infty$ attacks, respectively. However, a versatile framework covering all types of adversarial attacks has been lacking. This work for the first time unifies the methods of generating adversarial examples by leveraging ADMM (Alternating Direction Method of Multipliers), an operator-splitting optimization approach, such that $L_0$, $L_1$, $L_2$, and $L_\infty$ attacks can be effectively implemented by this general framework with only minor modifications. Compared with the state-of-the-art attacks in each category, our ADMM-based attacks are the strongest to date, achieving both a 100% attack success rate and the minimal distortion. |
URL | https://dl.acm.org/doi/10.1145/3240508.3240639 |
DOI | 10.1145/3240508.3240639 |
Citation Key | zhao_admm-based_2018 |
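The abstract above describes a single operator-splitting recipe: the distortion measure is separated from the network's attack loss through an auxiliary variable, and ADMM alternates between the two resulting subproblems plus a dual update; changing the norm only changes one proximal step. Below is a minimal NumPy sketch of that generic splitting, not the paper's implementation: `attack_loss_grad` is a hypothetical quadratic stand-in for the DNN attack loss, and the penalty `rho`, step size, and iteration counts are illustrative assumptions.

```python
import numpy as np

def attack_loss_grad(x_adv, target):
    """Gradient of a hypothetical surrogate attack loss
    0.5 * ||x_adv - target||^2, standing in for the DNN attack loss."""
    return x_adv - target

def soft_threshold(v, tau):
    """Elementwise soft-thresholding: the proximal operator of tau*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def admm_attack_sketch(x, target, p=2, rho=1.0, steps=200, lr=0.1):
    """Generic ADMM loop: split the distortion term D(z) from the attack
    loss f(x + delta) under the consensus constraint delta = z."""
    delta = np.zeros_like(x)   # distortion applied to the input
    z = np.zeros_like(x)       # auxiliary copy carrying the D term
    u = np.zeros_like(x)       # scaled dual variable for delta = z

    for _ in range(steps):
        # delta-update: a few gradient steps on
        #   f(x + delta) + (rho/2) * ||delta - z + u||^2
        for _ in range(5):
            grad = attack_loss_grad(x + delta, target) + rho * (delta - z + u)
            delta = delta - lr * grad

        # z-update: proximal operator of D/rho evaluated at (delta + u);
        # this is the only step that changes across L_p distortion choices
        v = delta + u
        if p == 1:
            z = soft_threshold(v, 1.0 / rho)   # L_1 distortion
        elif p == 2:
            z = rho * v / (1.0 + rho)          # squared L_2 distortion
        # (L_0 and L_inf need their own prox/projection steps)

        # dual update: accumulate the consensus residual delta - z
        u = u + (delta - z)

    return x + z  # adversarial example carrying the regularized distortion

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal(8)
    target = x + 0.5                      # hypothetical target point
    x_adv = admm_attack_sketch(x, target, p=1)
    print("L1 distortion:", np.abs(x_adv - x).sum())
```

The design point the sketch illustrates is the one the abstract claims: the outer ADMM loop and the loss-side update are norm-agnostic, so switching among $L_0$, $L_1$, $L_2$, and $L_\infty$ attacks amounts to swapping the proximal (or projection) step for the distortion term.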