Stylized Adversarial AutoEncoder for Image Generation

Title: Stylized Adversarial AutoEncoder for Image Generation
Publication Type: Conference Paper
Year of Publication: 2017
Authors: Zhao, Yiru; Deng, Bing; Huang, Jianqiang; Lu, Hongtao; Hua, Xian-Sheng
Conference Name: Proceedings of the 25th ACM International Conference on Multimedia
Publisher: ACM
Conference Location: New York, NY, USA
ISBN Number: 978-1-4503-4906-2
Keywords: autoencoder, Generative Adversarial Learning, generative adversarial network, image generation, Metrics, pubcrawl, resilience, Resiliency, Scalability
Abstract

In this paper, we propose an autoencoder-based generative adversarial network (GAN) for automatic image generation, called the "stylized adversarial autoencoder". Unlike existing generative autoencoders, which typically impose a prior distribution over the latent vector, the proposed approach splits the latent variable into two components: a style feature and a content feature, both encoded from real images. This split of the latent vector allows the content and the style of the generated image to be adjusted arbitrarily by choosing different exemplar images. In addition, a multi-class classifier is adopted as the discriminator in the GAN, which makes the generated images more realistic. We performed experiments on handwritten-digit, scene-text, and face datasets, on which the stylized adversarial autoencoder achieves superior results for image generation and also markedly improves the corresponding supervised recognition tasks.
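
The abstract's core idea, separate style and content encoders whose codes are concatenated and decoded by a generator, with a multi-class discriminator, can be illustrated with a minimal PyTorch-style sketch. All module names, layer sizes, and dimensions below are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of a stylized adversarial autoencoder forward pass (illustrative only).
import torch
import torch.nn as nn

STYLE_DIM, CONTENT_DIM, IMG_DIM, NUM_CLASSES = 16, 16, 28 * 28, 10

class Encoder(nn.Module):
    """Maps a flattened image to a latent code (reused for style and content branches)."""
    def __init__(self, out_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.ReLU(), nn.Linear(256, out_dim))
    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Decodes the concatenated [style, content] code back into an image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STYLE_DIM + CONTENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, IMG_DIM), nn.Tanh())
    def forward(self, z_style, z_content):
        return self.net(torch.cat([z_style, z_content], dim=1))

class Discriminator(nn.Module):
    """Multi-class discriminator: one logit per real class plus an extra 'fake' class."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, NUM_CLASSES + 1))
    def forward(self, x):
        return self.net(x)

# Mixing style and content taken from two different exemplar images.
style_enc, content_enc, gen = Encoder(STYLE_DIM), Encoder(CONTENT_DIM), Generator()
style_img = torch.randn(1, IMG_DIM)    # exemplar providing the style
content_img = torch.randn(1, IMG_DIM)  # exemplar providing the content
fake = gen(style_enc(style_img), content_enc(content_img))
print(fake.shape)  # torch.Size([1, 784])
```

In this sketch, swapping either exemplar changes only the corresponding component of the generated image, which is the "arbitrary adjustment" of style and content the abstract describes.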

URL: https://dl.acm.org/citation.cfm?doid=3123266.3123450
DOI: 10.1145/3123266.3123450
Citation Key: zhao_stylized_2017