Adversarial Examples for Generative Models
| Field | Value |
| --- | --- |
| Title | Adversarial Examples for Generative Models |
| Publication Type | Conference Paper |
| Year of Publication | 2018 |
| Authors | Kos, J., Fischer, I., Song, D. |
| Conference Name | 2018 IEEE Security and Privacy Workshops (SPW) |
| Date Published | May 2018 |
| Publisher | IEEE |
| ISBN Number | 978-1-5386-8276-0 |
| Keywords | adversarial examples, Adversary Models, classification-based adversaries, classifier, Data models, Decoding, deep generative models, deep learning architectures, Generative Models, Human Behavior, image classification, Image coding, Image reconstruction, image representation, input data distribution model, learning (artificial intelligence), machine learning, Metrics, neural net architectures, pubcrawl, Receivers, Resiliency, Scalability, target generative model, target generative network, Training, vae, vae gan, VAE-GAN architecture attacks, variational techniques |
| Abstract | We explore methods of producing adversarial examples on deep generative models such as the variational autoencoder (VAE) and the VAE-GAN. Deep learning architectures are known to be vulnerable to adversarial examples, but previous work has focused on the application of adversarial examples to classification tasks. Deep generative models have recently become popular due to their ability to model input data distributions and generate realistic examples from those distributions. We present three classes of attacks on the VAE and VAE-GAN architectures and demonstrate them against networks trained on MNIST, SVHN and CelebA. Our first attack leverages classification-based adversaries by attaching a classifier to the trained encoder of the target generative model, which can then be used to indirectly manipulate the latent representation. Our second attack directly uses the VAE loss function to generate a target reconstruction image from the adversarial example. Our third attack moves beyond relying on classification or the standard loss for the gradient and directly optimizes against differences in source and target latent representations. We also motivate why an attacker might be interested in deploying such techniques against a target generative network. |
| URL | https://ieeexplore.ieee.org/document/8424630 |
| DOI | 10.1109/SPW.2018.00014 |
| Citation Key | kos_adversarial_2018 |
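
The third attack described in the abstract, which optimizes directly against differences between source and target latent representations, lends itself to a short sketch. The toy encoder, the perturbation-penalty weight `lam`, the Adam optimizer, the step count, and the MNIST-shaped inputs below are all illustrative assumptions, not the authors' exact setup; in practice the encoder would be the trained VAE or VAE-GAN encoder under attack.

```python
# Hedged sketch of a latent-space attack on a VAE-style encoder: find a
# small perturbation of a source image whose latent code lands near the
# latent code of a chosen target image. Assumed hyperparameters throughout.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Toy stand-in for a trained VAE/VAE-GAN encoder (mean head only)."""

    def __init__(self, latent_dim=20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 400), nn.ReLU(),
            nn.Linear(400, latent_dim),
        )

    def forward(self, x):
        return self.net(x)


def latent_attack(encoder, x_source, x_target, lam=1.0, steps=200, lr=0.05):
    """Optimize delta so that encoder(x_source + delta) ~= encoder(x_target),
    with an L2 penalty keeping the perturbation small."""
    with torch.no_grad():
        z_target = encoder(x_target)
    delta = torch.zeros_like(x_source, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = (x_source + delta).clamp(0.0, 1.0)  # keep a valid image
        latent_loss = ((encoder(x_adv) - z_target) ** 2).sum()
        loss = latent_loss + lam * (delta ** 2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x_source + delta).detach().clamp(0.0, 1.0)


if __name__ == "__main__":
    enc = Encoder()                     # in practice: a trained encoder
    x_src = torch.rand(1, 1, 28, 28)    # MNIST-shaped placeholder inputs
    x_tgt = torch.rand(1, 1, 28, 28)
    x_adv = latent_attack(enc, x_src, x_tgt)
```

Decoding `x_adv` with the model's decoder would then show whether the reconstruction resembles the target rather than the source, mirroring the evaluation the abstract describes on MNIST, SVHN, and CelebA.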