Multi-View Image Generation from a Single-View
Title | Multi-View Image Generation from a Single-View |
Publication Type | Conference Paper |
Year of Publication | 2018 |
Authors | Zhao, Bo, Wu, Xiao, Cheng, Zhi-Qi, Liu, Hao, Jie, Zequn, Feng, Jiashi |
Conference Name | Proceedings of the 26th ACM International Conference on Multimedia |
Publisher | ACM |
ISBN Number | 978-1-4503-5665-7 |
Keywords | Deep Learning, Generative Adversarial Learning, generative adversarial networks, image generation |
Abstract | How to generate multi-view images with realistic-looking appearance from only a single view input is a challenging problem. In this paper, we attack this problem by proposing a novel image generation model termed VariGANs, which combines the merits of variational inference and Generative Adversarial Networks (GANs). It generates the target image in a coarse-to-fine manner instead of in a single pass, which often suffers from severe artifacts. It first performs variational inference to model the global appearance of the object (e.g., shape and color) and produces coarse images of different views. Conditioned on the generated coarse images, it then performs adversarial learning to fill in details consistent with the input and generate the fine images. Extensive experiments conducted on two clothing datasets, MVC and DeepFashion, demonstrate that the images generated by the proposed VariGANs are more plausible than those generated by existing approaches, providing more consistent global appearance as well as richer and sharper details. |
URL | https://dl.acm.org/citation.cfm?doid=3240508.3240536 |
DOI | 10.1145/3240508.3240536 |
Citation Key | zhao_multi-view_2018 |
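
The two-stage coarse-to-fine pipeline described in the abstract can be sketched as follows. This is a minimal illustrative skeleton in numpy, not the authors' implementation: every function body, latent dimension, and image size here is a hypothetical stand-in for the learned networks in the paper (a variational encoder/decoder for the coarse stage, an adversarially trained refiner for the fine stage).

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(image, latent_dim=8):
    # Variational-encoder stub: map the input view to a Gaussian
    # posterior (mu, log_var) over a latent appearance code.
    # (Hypothetical: VariGANs learns this with a conv network.)
    mu = np.full(latent_dim, image.mean())
    log_var = np.zeros(latent_dim)
    return mu, log_var

def reparameterize(mu, log_var):
    # Standard VAE reparameterization trick: z = mu + sigma * eps.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def coarse_generator(z, size=(64, 64, 3)):
    # Stage 1: decode z into a low-frequency coarse image capturing
    # the global shape/color of the target view (stub decoder).
    return np.tanh(np.full(size, z.mean()))

def fine_generator(coarse, source):
    # Stage 2: the adversarially trained refiner fills in details,
    # conditioned on both the coarse image and the source view
    # (stubbed here as a simple blend kept in the tanh range).
    return np.clip(coarse + 0.1 * (source - coarse), -1.0, 1.0)

# Usage: single source view -> coarse target view -> refined view.
source = rng.uniform(-1.0, 1.0, (64, 64, 3))
mu, log_var = encode(source)
z = reparameterize(mu, log_var)
coarse = coarse_generator(z)
fine = fine_generator(coarse, source)
assert fine.shape == source.shape
```

The key design point the abstract emphasizes is the split of labor: the variational stage only has to get the global appearance right, which sidesteps the severe artifacts of generating a novel view in a single adversarial pass, while the GAN stage only has to add view-consistent detail on top of an already-plausible coarse image.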