Image Style Transfer with Multi-target Loss for IoT Applications

Title: Image Style Transfer with Multi-target Loss for IoT Applications
Publication Type: Conference Paper
Year of Publication: 2018
Authors: Wang, C., He, M.
Conference Name: 2018 15th International Symposium on Pervasive Systems, Algorithms and Networks (I-SPAN)
Date Published: Oct.
Keywords: arbitrary images, artistic images, Computer vision, content style, context image, feature extraction, feature map, Filter banks, fundamental problem, gradient methods, Image reconstruction, image style transfer, Information filters, input images, input style image, learning (artificial intelligence), loss function, multitarget loss, neural nets, neural style transfer, output image, pre-trained deep convolutional neural network VGG19, Predictive Metrics, pubcrawl, Resiliency, Scalability, visualization
Abstract

Transferring the style of an image is a fundamental problem in computer vision: the features of a content image and a style image are extracted and then combined to produce a new image that carries features of both inputs. In this paper, we introduce an artificial system that separates and recombines the content and style of arbitrary images, providing a neural algorithm for the creation of artistic images. We use the pre-trained deep convolutional neural network VGG19 to extract feature maps of the input style image and content image. We then define a loss function that captures the difference between the output image and the two input images, and use gradient descent to update the output image so as to minimize this loss. Experimental results show the feasibility of the method.
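The abstract outlines a standard neural style transfer pipeline: VGG19 feature extraction, a combined content/style loss, and gradient-based updates of the output image. The following is a minimal sketch of that pipeline, not the authors' implementation: it assumes PyTorch with torchvision's pretrained VGG19, the common conv1_1..conv5_1 style layers and conv4_2 content layer, Gram matrices as the style representation, and Adam in place of plain gradient descent; loss weights and step counts are illustrative.

```python
# Minimal neural style transfer sketch (an illustration, not the paper's code).
# Inputs are assumed to be (1, 3, H, W) tensors, already resized and
# ImageNet-normalized.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = vgg19(weights="IMAGENET1K_V1").features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

# Layer indices are an assumption: conv1_1..conv5_1 for style, conv4_2 for content.
STYLE_LAYERS = [0, 5, 10, 19, 28]
CONTENT_LAYER = 21

def feature_maps(x, layers):
    """Run x through VGG19 and collect feature maps at the requested layers."""
    out = {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            out[i] = x
    return out

def gram(fmap):
    """Gram matrix of a feature map, used as the style representation."""
    b, c, h, w = fmap.shape
    f = fmap.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_transfer(content_img, style_img, steps=300,
                   content_weight=1.0, style_weight=1e6):
    """Optimize the output image to minimize a combined content + style loss."""
    content_target = feature_maps(content_img, [CONTENT_LAYER])[CONTENT_LAYER].detach()
    style_targets = {i: gram(f).detach()
                     for i, f in feature_maps(style_img, STYLE_LAYERS).items()}

    # Start from the content image and update its pixels directly
    # (Adam here, standing in for the plain gradient method in the abstract).
    output = content_img.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([output], lr=0.02)

    for _ in range(steps):
        optimizer.zero_grad()
        c_feat = feature_maps(output, [CONTENT_LAYER])[CONTENT_LAYER]
        s_feats = feature_maps(output, STYLE_LAYERS)
        content_loss = F.mse_loss(c_feat, content_target)
        style_loss = sum(F.mse_loss(gram(s_feats[i]), style_targets[i])
                         for i in STYLE_LAYERS)
        loss = content_weight * content_loss + style_weight * style_loss
        loss.backward()
        optimizer.step()
    return output.detach()
```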

DOI: 10.1109/I-SPAN.2018.00057
Citation Key: wang_image_2018