Biblio

Filters: Keyword is style transferred image
2020-12-11
Lee, P., Tseng, C. 2019. On the Layer Choice of the Image Style Transfer Using Convolutional Neural Networks. 2019 IEEE International Conference on Consumer Electronics - Taiwan (ICCE-TW). :1-2.

In this paper, the layer choices of the image style transfer method using the VGG-19 neural network are studied. The VGG-19 network is used to extract feature maps, whose implicit meaning serves as a learning basis. If the layers for stylistic learning are not suitably chosen, the quality of the style-transferred image may suffer. Experiments show that color information is concentrated in the lower layers, from conv1-1 to conv2-2, and texture information is concentrated in the middle layers, from conv3-1 to conv4-4. The higher layers, from conv5-1 to conv5-4, appear to capture image content well. Based on these observations, methods for color transfer, texture transfer, and style transfer are presented and compared with conventional methods.
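
The layer grouping the authors report maps directly onto feature extraction code. Below is a minimal sketch, assuming PyTorch and torchvision's pretrained VGG-19; the module indices and the `extract_features` helper are ours for illustration, not from the paper.

```python
import torch
import torchvision.models as models

# Indices of the conv layers inside torchvision's VGG-19 `features` module,
# grouped per the paper's observation: color ~ conv1_1..conv2_2,
# texture ~ conv3_1..conv4_4, content ~ conv5_1..conv5_4.
LAYER_GROUPS = {
    "color":   [0, 2, 5, 7],                      # conv1_1, conv1_2, conv2_1, conv2_2
    "texture": [10, 12, 14, 16, 19, 21, 23, 25],  # conv3_1..conv3_4, conv4_1..conv4_4
    "content": [28, 30, 32, 34],                  # conv5_1..conv5_4
}

vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def extract_features(image, layer_ids):
    """Run `image` through VGG-19 and collect feature maps at `layer_ids`."""
    feats, x = [], image
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layer_ids:
            feats.append(x)
    return feats

# Example: gather the texture-carrying feature maps of a style image.
style = torch.rand(1, 3, 256, 256)  # placeholder for a preprocessed style image
texture_feats = extract_features(style, LAYER_GROUPS["texture"])
```

Restricting the style loss to one of these groups is what distinguishes the paper's color transfer (lower layers) from its texture transfer (middle layers), while the content loss draws on the higher layers.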

Huang, Y., Jing, M., Tang, H., Fan, Y., Xue, X., Zeng, X. 2019. Real-Time Arbitrary Style Transfer with Convolution Neural Network. 2019 IEEE International Conference on Integrated Circuits, Technologies and Applications (ICTA). :65-66.

Style transfer is a research hotspot in computer vision. Although much research has been conducted on it, high-quality style transfer remains a challenge. In this work, we propose ASTCNN, a real-time Arbitrary Style Transfer Convolution Neural Network. The ASTCNN consists of two independent encoders and a decoder. The encoders extract style and content features from the style and content images, respectively, and the decoder generates the style-transferred image. Experimental results show that ASTCNN produces higher-quality output images than state-of-the-art style transfer algorithms while requiring 23.3% less floating-point computation.
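
The abstract only specifies the high-level layout (two independent encoders feeding one decoder), not the ASTCNN architecture itself. The sketch below, in PyTorch, illustrates that layout; the layer widths, the channel-concatenation fusion step, and all module names are assumptions for illustration, not the published design.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, stride=1):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
        nn.ReLU(inplace=True),
    )

class Encoder(nn.Module):
    """One of the two independent encoders (style or content)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(3, 32),
            conv_block(32, 64, stride=2),
            conv_block(64, 128, stride=2),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Maps the fused style and content features back to an RGB image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(256, 128),
            nn.Upsample(scale_factor=2, mode="nearest"), conv_block(128, 64),
            nn.Upsample(scale_factor=2, mode="nearest"), conv_block(64, 32),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )
    def forward(self, f):
        return self.net(f)

class StyleTransferNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.style_enc, self.content_enc = Encoder(), Encoder()
        self.decoder = Decoder()
    def forward(self, content, style):
        # Fuse the two feature maps by channel concatenation (an assumption;
        # the abstract only says the decoder consumes both feature sets).
        fused = torch.cat([self.content_enc(content), self.style_enc(style)], dim=1)
        return self.decoder(fused)

net = StyleTransferNet()
out = net(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256))
print(out.shape)  # torch.Size([1, 3, 256, 256])
```

Because the style image passes through its own feed-forward encoder rather than driving a per-style optimization, a single trained network of this shape can stylize with arbitrary styles in one forward pass, which is what makes real-time operation plausible.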