Stylized Adversarial AutoEncoder for Image Generation
Title | Stylized Adversarial AutoEncoder for Image Generation |
Publication Type | Conference Paper |
Year of Publication | 2017 |
Authors | Zhao, Yiru, Deng, Bing, Huang, Jianqiang, Lu, Hongtao, Hua, Xian-Sheng |
Conference Name | Proceedings of the 25th ACM International Conference on Multimedia |
Publisher | ACM |
Conference Location | New York, NY, USA |
ISBN Number | 978-1-4503-4906-2 |
Keywords | autoencoder, Generative Adversarial Learning, generative adversarial network, image generation |
Abstract | In this paper, we propose an autoencoder-based generative adversarial network (GAN) for automatic image generation, called the "stylized adversarial autoencoder". Unlike existing generative autoencoders, which typically impose a prior distribution on the latent vector, the proposed approach splits the latent variable into two components, a style feature and a content feature, both encoded from real images. This split enables us to adjust the content and the style of the generated image arbitrarily by choosing different exemplar images. In addition, a multiclass classifier serves as the discriminator in the GAN, which makes the generated images more realistic. We performed experiments on handwritten-digit, scene-text, and face datasets, in which the stylized adversarial autoencoder achieves superior image-generation results and markedly improves the corresponding supervised recognition task. |
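The abstract's core idea, a latent code split into separately encoded content and style parts, decoded jointly, and judged by a multiclass discriminator with an extra "fake" class, can be sketched in a few lines of NumPy. This is a minimal illustrative sketch of the architecture's data flow only; all shapes, weight matrices, and function names are assumptions, not the paper's implementation, and no training is performed.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_content(x, w):
    # content encoder: image -> content code
    return np.tanh(x @ w)

def encode_style(x, w):
    # style encoder: image -> style code
    return np.tanh(x @ w)

def decode(z_content, z_style, w):
    # the split latent is recombined before decoding
    z = np.concatenate([z_content, z_style], axis=-1)
    return np.tanh(z @ w)

def discriminate(x, w):
    # multiclass discriminator: K real classes plus one "fake" class
    logits = x @ w
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# illustrative dimensions: flattened image, content code, style code, classes
d_img, d_c, d_s, k = 64, 8, 4, 10
Wc = rng.standard_normal((d_img, d_c))
Ws = rng.standard_normal((d_img, d_s))
Wd = rng.standard_normal((d_c + d_s, d_img))
Wk = rng.standard_normal((d_img, k + 1))

content_img = rng.standard_normal((1, d_img))  # exemplar supplying content
style_img = rng.standard_normal((1, d_img))    # exemplar supplying style

# mix content from one exemplar with style from another
fake = decode(encode_content(content_img, Wc), encode_style(style_img, Ws), Wd)
probs = discriminate(fake, Wk)
print(fake.shape, probs.shape)  # (1, 64) (1, 11)
```

Swapping either exemplar changes only the corresponding half of the latent code, which is what lets the generated image's content and style be adjusted independently.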
URL | https://dl.acm.org/citation.cfm?doid=3123266.3123450 |
DOI | 10.1145/3123266.3123450 |
Citation Key | zhao_stylized_2017 |