Transforming Photos to Comics Using Convolutional Neural Networks

Title: Transforming Photos to Comics Using Convolutional Neural Networks
Publication Type: Conference Paper
Year of Publication: 2017
Authors: Chen, Y., Lai, Y., Liu, Y.
Conference Name: 2017 IEEE International Conference on Image Processing (ICIP)
Date Published: Sept. 2017
Publisher: IEEE
ISBN Number: 978-1-5090-2175-8
Keywords: ART, artistic styles, comic images classification, comic styles, comic stylization, comics, content image, convolution, convolutional neural networks, dedicated comic style CNN, deep convolutional neural networks, Deep Learning, feature extraction, feedforward neural nets, Gatys's method, Gatys's recent work, Gray-scale, grayscale image, grayscale style image, image classification, Image color analysis, image colour analysis, Image reconstruction, image texture, learning (artificial intelligence), Metrics, minimalist styles, Neural networks, neural style transfer, Optimization, pre-trained VGG network, pubcrawl, rendering (computer graphics), resilience, Resiliency, Scalability, Standards, style transfer, transforms photos
Abstract

In this paper, inspired by Gatys's recent work, we propose a novel approach that transforms photos to comics using deep convolutional neural networks (CNNs). While Gatys's method, which uses a pre-trained VGG network, generally works well for transferring artistic styles such as paintings from a style image to a content image, it often fails to produce satisfactory results for more minimalist styles such as comics. To address this, we further introduce a dedicated comic style CNN, which is trained to classify comic images and photos. This new network is effective in capturing various comic styles and thus helps to produce better comic stylization results. Moreover, even with a grayscale style image, Gatys's method can still produce colored output, which is undesirable for comics. We develop a modified optimization framework that guarantees a grayscale image is synthesized. To avoid converging to poor local minima, we further initialize the output image with the grayscale version of the content image. Various examples show that our method synthesizes better comic images than the state-of-the-art method.
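For context on the two ingredients the abstract mentions: Gatys-style transfer represents style as Gram matrices of CNN feature maps, and the authors initialize their optimization from a grayscale version of the content image. The following is a minimal NumPy sketch of those two building blocks only — it does not reproduce the paper's networks, losses, or training; function names and the BT.601 luminance weights are illustrative choices, not taken from the paper.

```python
import numpy as np

def gram_matrix(features):
    """Style representation used in Gatys-style transfer: the Gram
    matrix of channel-wise feature correlations, normalized by the
    number of spatial positions. `features` has shape (C, H, W)."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return (f @ f.T) / (h * w)

def to_grayscale(rgb):
    """Luminance conversion (ITU-R BT.601 weights) for an (H, W, 3)
    image; a plausible way to build the grayscale initialization the
    abstract describes."""
    return rgb @ np.array([0.299, 0.587, 0.114])
```

In the full method, an optimizer would adjust a single-channel output image so that its Gram matrices match those of the comic style image while its deeper features match the content image; optimizing only one channel is what guarantees a grayscale result.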

URL: https://ieeexplore.ieee.org/document/8296634
DOI: 10.1109/ICIP.2017.8296634
Citation Key: chen_transforming_2017