
Title: A General Framework for Adversarial Examples with Objectives
Publication Type: Journal Article
Year of Publication: 2019
Authors: Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, Michael K. Reiter
Journal: ACM Transactions on Privacy and Security (TOPS)
Volume: 22
Issue: 3
Date Published: 06/2019
ISSN: 2471-2566
Other Numbers: Article 16
Keywords: 2019: July, adversarial examples, CMU, face recognition, machine learning, Metrics, Neural networks, Resilient Architectures, Safety Critical ML, Securing Safety-Critical Machine Learning Algorithms
Abstract

Images perturbed subtly to be misclassified by neural networks, called adversarial examples, have emerged as a technically deep challenge and an important concern for several application domains. Most research on adversarial examples takes as its only constraint that the perturbed images are similar to the originals. However, real-world application of these ideas often requires the examples to satisfy additional objectives, which are typically enforced through custom modifications of the perturbation process. In this article, we propose adversarial generative nets (AGNs), a general methodology to train a generator neural network to emit adversarial examples satisfying desired objectives. We demonstrate the ability of AGNs to accommodate a wide range of objectives, including imprecise ones difficult to model, in two application domains. In particular, we demonstrate physical adversarial examples--eyeglass frames designed to fool face recognition--with better robustness, inconspicuousness, and scalability than previous approaches, as well as a new attack to fool a handwritten-digit classifier.
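The abstract's core idea, training a generator against a fixed target classifier so that its outputs both fool the classifier and resemble legitimate artifacts, can be sketched roughly as follows. This is a minimal illustrative sketch in PyTorch, not the authors' implementation: the network architectures, the additive way the perturbation is applied to the input, and the loss weight kappa are all assumptions made for illustration.

import torch
import torch.nn as nn

# Generator G maps random noise to a candidate perturbation (e.g., an
# eyeglass-frame texture); Tanh keeps values in [-1, 1].
class Generator(nn.Module):
    def __init__(self, z_dim=32, out_dim=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

# Discriminator D scores whether an artifact looks like a real one,
# pushing G toward inconspicuous outputs.
class Discriminator(nn.Module):
    def __init__(self, in_dim=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )

    def forward(self, x):
        return self.net(x)

def train_step(G, D, F, x, y_true, real_artifacts, opt_g, opt_d, kappa=0.5):
    """One GAN-style step. F is the fixed target classifier (not updated);
    x are flattened inputs, y_true their correct labels, and
    real_artifacts are samples of legitimate designs."""
    bce = nn.BCEWithLogitsLoss()
    z = torch.randn(x.size(0), 32)
    fake = G(z)

    # Discriminator update: real artifacts -> 1, generated ones -> 0.
    opt_d.zero_grad()
    d_loss = (bce(D(real_artifacts), torch.ones(real_artifacts.size(0), 1))
              + bce(D(fake.detach()), torch.zeros(x.size(0), 1)))
    d_loss.backward()
    opt_d.step()

    # Generator update: look real to D *and* make F misclassify x + fake.
    opt_g.zero_grad()
    realism = bce(D(fake), torch.ones(x.size(0), 1))
    logits = F((x + fake).clamp(-1.0, 1.0))
    # Untargeted objective: drive down the logit of the true class.
    fooling = logits.gather(1, y_true.unsqueeze(1)).mean()
    g_loss = realism + kappa * fooling
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

The weight kappa trades off the two objectives the abstract highlights: realism to the discriminator (inconspicuousness) versus misclassification by the target (attack success). In the physical setting the abstract describes, further objectives would typically enter g_loss as additional terms.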

URL: https://doi.org/10.1145/3317611
DOI: 10.1145/3317611
Citation Key: node-61478

Other available formats:

Sharif_Gen_framework_adv_examples_Bauer.pdf