Biblio

Filters: Keyword is image generation
2019-05-01
Tsunashima, Hideki, Hoshi, Taisei, Chen, Qiu.  2018.  DzGAN: Improved Conditional Generative Adversarial Nets Using Divided Z-Vector. Proceedings of the 2018 International Conference on Computing and Big Data. :52-55.

Conditional Generative Adversarial Nets (cGAN) [1] were recently proposed as a novel conditional learning method that feeds some extra information into the network. In this paper we propose an improved conditional GAN that uses a divided z-vector (DzGAN). The amount of computation is reduced because DzGAN implements conditional learning using not images but a one-hot vector, by dividing the range of the z-vector (e.g. -1~1 into -1~0 and 0~1). In the DzGAN, the discriminator is fed the images together with their labels as one-hot vectors, and the generator is fed the divided z-vector (e.g. with the 10 classes of the MNIST dataset, the divided z-vectors are z1~z10 accordingly) whose sub-range corresponds to the label fed into the discriminator; in this way conditional learning is implemented. In this paper we use conditional Deep Convolutional Generative Adversarial Networks (cDCGAN) [7] instead of cGAN because cDCGAN can generate clearer images than cGAN. Heuristic experiments on conditional learning that compare the amount of computation demonstrate that DzGAN is superior to cDCGAN.
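
The central trick above is that the class label is encoded by which sub-interval of the z range the noise is drawn from, so the generator itself needs no label input. Below is a minimal sketch of that sampling step in PyTorch; the equal-width partition of [-1, 1] and the toy network sizes are assumptions for illustration, not the authors' implementation.

    import torch
    import torch.nn as nn

    def sample_divided_z(labels, num_classes=10, z_dim=100, low=-1.0, high=1.0):
        """Sample z uniformly from the sub-interval of [low, high] assigned to each label.

        E.g. with 10 classes on [-1, 1], class 0 uses [-1.0, -0.8), class 1 uses [-0.8, -0.6), ...
        (Equal-width slices are an assumption for illustration.)
        """
        width = (high - low) / num_classes                      # size of each class's slice
        lo = low + labels.float().unsqueeze(1) * width          # per-sample lower bound
        return lo + torch.rand(labels.size(0), z_dim) * width   # uniform noise inside the slice

    class Generator(nn.Module):
        """Toy generator: the label is implicit in where z lives, so only z is fed in."""
        def __init__(self, z_dim=100, img_dim=28 * 28):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(z_dim, 256), nn.ReLU(),
                nn.Linear(256, img_dim), nn.Tanh(),
            )
        def forward(self, z):
            return self.net(z)

    labels = torch.randint(0, 10, (16,))      # e.g. MNIST digit classes
    z = sample_divided_z(labels)              # the conditional information is carried by z alone
    fake = Generator()(z)                     # (16, 784) images in [-1, 1]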

Tirupattur, Praveen, Rawat, Yogesh Singh, Spampinato, Concetto, Shah, Mubarak.  2018.  ThoughtViz: Visualizing Human Thoughts Using Generative Adversarial Network. Proceedings of the 26th ACM International Conference on Multimedia. :950-958.

Studying human brain signals has always gathered great attention from the scientific community. In Brain Computer Interface (BCI) research, for example, changes in brain signals in relation to specific tasks (e.g., thinking about something) are detected and used to control machines. While extracting spatio-temporal cues from brain signals for classifying the state of the human mind is an explored path, decoding and visualizing brain states is new and futuristic. Following this latter direction, in this paper, we propose an approach that is able not only to read the mind, but also to decode and visualize human thoughts. More specifically, we analyze the brain activity, recorded by an ElectroEncephaloGram (EEG), of a subject while thinking about a digit, character or object, and visually synthesize the thought item. To accomplish this, we leverage recent progress in adversarial learning by devising a conditional Generative Adversarial Network (GAN), which takes encoded EEG signals as input and generates corresponding images. In addition, since collecting large EEG datasets is not trivial, our GAN model allows for learning distributions with limited training data. Performance analysis carried out on three different datasets (brain signals of multiple subjects thinking of digits, characters, and objects) shows that our approach is able to effectively generate images from a person's thoughts. The results also demonstrate that EEG signals explicitly encode cues from thoughts, which can be effectively used for generating semantically relevant visualizations.
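
As a rough illustration of the conditioning described above, the sketch below wires a placeholder EEG encoder into a generator by concatenating the EEG code with a noise vector. The encoder type, dimensions and image size are assumptions chosen for brevity, not the architecture used in the paper.

    import torch
    import torch.nn as nn

    class EEGEncoder(nn.Module):
        """Placeholder encoder: compress a multi-channel EEG window into a fixed-length code."""
        def __init__(self, channels=14, samples=32, code_dim=100):
            super().__init__()
            self.rnn = nn.LSTM(input_size=channels, hidden_size=code_dim, batch_first=True)
        def forward(self, eeg):                      # eeg: (batch, samples, channels)
            _, (h, _) = self.rnn(eeg)
            return h[-1]                             # (batch, code_dim)

    class ConditionalGenerator(nn.Module):
        """Generator conditioned on the EEG code by concatenating it with noise."""
        def __init__(self, noise_dim=100, code_dim=100, img_dim=28 * 28):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(noise_dim + code_dim, 256), nn.ReLU(),
                nn.Linear(256, img_dim), nn.Tanh(),
            )
        def forward(self, noise, code):
            return self.net(torch.cat([noise, code], dim=1))

    eeg = torch.randn(8, 32, 14)                     # 8 dummy EEG windows: 32 samples x 14 channels
    code = EEGEncoder()(eeg)                         # encoded brain signal
    images = ConditionalGenerator()(torch.randn(8, 100), code)   # (8, 784) generated images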

Zhao, Bo, Wu, Xiao, Cheng, Zhi-Qi, Liu, Hao, Jie, Zequn, Feng, Jiashi.  2018.  Multi-View Image Generation from a Single-View. Proceedings of the 26th ACM International Conference on Multimedia. :383-391.

Generating multi-view images with realistic-looking appearance from only a single-view input is a challenging problem. In this paper, we attack this problem by proposing a novel image generation model termed VariGANs, which combines the merits of variational inference and Generative Adversarial Networks (GANs). It generates the target image in a coarse-to-fine manner instead of in a single pass, which suffers from severe artifacts. It first performs variational inference to model the global appearance of the object (e.g., shape and color) and produces coarse images of different views. Conditioned on the generated coarse images, it then performs adversarial learning to fill in details consistent with the input and generate the fine images. Extensive experiments conducted on two clothing datasets, MVC and DeepFashion, have demonstrated that the images generated by the proposed VariGANs are more plausible than those generated by existing approaches, providing more consistent global appearance as well as richer and sharper details.
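
The coarse-to-fine flow can be sketched as two stages: a variational stage that produces a low-resolution target view, and an adversarially trained refiner conditioned on both the input view and that coarse output. The flattened-image representation and layer shapes below are placeholder assumptions, not the published VariGANs design.

    import torch
    import torch.nn as nn

    class CoarseVAE(nn.Module):
        """Stage 1: variational inference producing a coarse target-view image."""
        def __init__(self, img_dim=64 * 64 * 3, z_dim=128):
            super().__init__()
            self.enc = nn.Linear(img_dim, 2 * z_dim)            # predicts mean and log-variance
            self.dec = nn.Sequential(nn.Linear(z_dim, img_dim), nn.Tanh())
        def forward(self, source_view):
            mu, logvar = self.enc(source_view).chunk(2, dim=1)
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization trick
            return self.dec(z)                                  # coarse target view

    class FineGenerator(nn.Module):
        """Stage 2: adversarially trained refiner conditioned on the source view and the coarse output."""
        def __init__(self, img_dim=64 * 64 * 3):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(2 * img_dim, 1024), nn.ReLU(),
                                     nn.Linear(1024, img_dim), nn.Tanh())
        def forward(self, source_view, coarse):
            return self.net(torch.cat([source_view, coarse], dim=1))

    source = torch.randn(4, 64 * 64 * 3)             # flattened single-view inputs
    coarse = CoarseVAE()(source)                     # global shape/color, few details
    fine = FineGenerator()(source, coarse)           # details filled in, consistent with the input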

2019-02-22
Hu, D., Wang, L., Jiang, W., Zheng, S., Li, B..  2018.  A Novel Image Steganography Method via Deep Convolutional Generative Adversarial Networks. IEEE Access. 6:38303-38314.

The security of image steganography is an important basis for evaluating steganography algorithms. Steganography has recently made great progress in its long-term confrontation with steganalysis. To improve its security, image steganography must be able to resist detection by steganalysis algorithms. Traditional embedding-based steganography embeds the secret information into the content of an image, which unavoidably leaves a trace of the modification that can be detected by increasingly advanced machine-learning-based steganalysis algorithms. The concept of steganography without embedding (SWE), which does not need to modify the data of the carrier image, emerged to overcome detection by machine-learning-based steganalysis algorithms. In this paper, we propose a novel image SWE method based on deep convolutional generative adversarial networks. We map the secret information into a noise vector and use the trained generator neural network model to generate the carrier image based on the noise vector. No modification or embedding operations are required during the process of image generation, and the information contained in the image can be extracted successfully by another neural network, called the extractor, after training. The experimental results show that this method offers highly accurate information extraction and a strong ability to resist detection by state-of-the-art image steganalysis algorithms.
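
The key step is mapping the secret message into the generator's noise vector, so the stego image is synthesized rather than modified. One plausible coding (illustrative only, not necessarily the one used in the paper) maps each bit to the sign of one noise coordinate, which a trained extractor network can then recover; the generator and extractor themselves are only referenced in comments.

    import torch

    def bits_to_noise(bits, sigma=1.0, delta=0.5):
        """Map each secret bit to one coordinate of the noise vector.

        Bit 0 -> a value drawn from the negative side, bit 1 -> a value from the
        positive side, with a margin of at least `delta` so the bit can be
        recovered from the sign of the extractor's estimate. (Illustrative choice.)
        """
        signs = bits.float() * 2 - 1                         # {0, 1} -> {-1, +1}
        magnitude = delta + torch.rand_like(signs) * (sigma - delta)
        return signs * magnitude                             # shape: (batch, n_bits)

    def noise_to_bits(noise_estimate):
        """Extractor side: recover bits from the sign of the reconstructed noise."""
        return (noise_estimate > 0).long()

    secret = torch.randint(0, 2, (1, 100))                   # 100 secret bits
    z = bits_to_noise(secret)                                # noise vector fed to a trained generator
    # carrier = generator(z)          # stego image is synthesized; nothing is embedded afterwards
    # z_hat = extractor(carrier)      # a second network trained to invert the generator
    recovered = noise_to_bits(bits_to_noise(secret))         # sanity check of the coding itself
    assert torch.equal(recovered, secret)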

2018-11-19
Zhao, Yiru, Deng, Bing, Huang, Jianqiang, Lu, Hongtao, Hua, Xian-Sheng.  2017.  Stylized Adversarial AutoEncoder for Image Generation. Proceedings of the 25th ACM International Conference on Multimedia. :244-251.

In this paper, we propose an autoencoder-based generative adversarial network (GAN) for automatic image generation, called the "stylized adversarial autoencoder". Different from existing generative autoencoders, which typically impose a prior distribution over the latent vector, the proposed approach splits the latent variable into two components: a style feature and a content feature, both encoded from real images. The split of the latent vector enables us to adjust the content and the style of the generated image arbitrarily by choosing different exemplary images. In addition, a multiclass classifier is adopted as the discriminator of the GAN, which makes the generated images more realistic. We performed experiments on handwritten digit, scene text and face datasets, in which the stylized adversarial autoencoder achieves superior results for image generation and also remarkably improves the corresponding supervised recognition task.
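
A bare-bones sketch of the split-latent idea follows: one exemplar image supplies the content code, another supplies the style code, and the decoder mixes them. The encoders, decoder and layer sizes are illustrative assumptions; the adversarial training loop and the multi-class discriminator are omitted.

    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        """Maps an image to a fixed-length code (used twice: once for content, once for style)."""
        def __init__(self, img_dim=28 * 28, code_dim=64):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(img_dim, 256), nn.ReLU(),
                                     nn.Linear(256, code_dim))
        def forward(self, x):
            return self.net(x)

    class Decoder(nn.Module):
        """Generator: reconstructs an image from a (content, style) pair."""
        def __init__(self, code_dim=64, img_dim=28 * 28):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(2 * code_dim, 256), nn.ReLU(),
                                     nn.Linear(256, img_dim), nn.Tanh())
        def forward(self, content_code, style_code):
            return self.net(torch.cat([content_code, style_code], dim=1))

    content_enc, style_enc, dec = Encoder(), Encoder(), Decoder()
    content_img = torch.randn(1, 28 * 28)       # exemplar that fixes *what* is drawn (e.g. the digit)
    style_img = torch.randn(1, 28 * 28)         # exemplar that fixes *how* it is drawn
    mixed = dec(content_enc(content_img), style_enc(style_img))
    # Swapping either exemplar changes only that factor of the generated image.
    # In the paper a multi-class classifier serves as the discriminator scoring `mixed`;
    # that training loop is not shown here.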