Bibliography
Image style transfer is an increasingly interesting topic in computer vision, where the goal is to map images from one style to another. In this paper, we propose a new framework, Combined Layer GAN, as a solution to the image style transfer problem. Specifically, edge and color constraints are proposed and explored within a GAN-based image translation method to improve performance. The motivation for this work is that color and edges are fundamental visual factors of an image, yet traditional deep-network-based approaches lack fine control over these factors during translation, and performance degrades as a consequence. Our experiments and evaluations show that the proposed method with edge and color constraints is more stable and significantly outperforms traditional methods.
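As a rough illustration of how such constraints can be attached to a GAN objective, the sketch below adds a Sobel-based edge term and a coarse color-statistics term to a generator loss. The filter choice, the pooling size, and the weights lambda_edge and lambda_color are assumptions made for illustration, not the formulation used in the paper.

```python
import torch
import torch.nn.functional as F

def edge_map(img):
    """Approximate edges with fixed Sobel filters on the luminance channel."""
    sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    sobel_y = sobel_x.t()
    kernels = torch.stack([sobel_x, sobel_y]).unsqueeze(1).to(img)  # (2, 1, 3, 3)
    gray = img.mean(dim=1, keepdim=True)                            # collapse RGB to luminance
    grads = F.conv2d(gray, kernels, padding=1)                      # (N, 2, H, W)
    return grads.pow(2).sum(dim=1).sqrt()                           # gradient magnitude

def color_stats(img, size=8):
    """Coarse color statistics: a heavily downsampled version of the image."""
    return F.adaptive_avg_pool2d(img, size)

def generator_loss(d_fake, fake, real, lambda_edge=10.0, lambda_color=10.0):
    """Adversarial loss plus assumed edge and color constraint terms."""
    adv = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    edge = F.l1_loss(edge_map(fake), edge_map(real))
    color = F.l1_loss(color_stats(fake), color_stats(real))
    return adv + lambda_edge * edge + lambda_color * color
```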
We re-define multimodality and introduce a simple approach to multimodal and arbitrary style transfer. Conventionally, style transfer methods are limited to synthesizing a deterministic output from a single style, and no prior work can generate multiple images with varying details, i.e., multimodality, from a single style. In this work, we explore a way to achieve multimodal and arbitrary style transfer by injecting noise into a unimodal method. This approach requires no trainable parameters and can be readily applied to any unimodal style transfer method in the literature that has a separate style-encoding sub-network. Experimental results show that, while the method can transfer an image to multiple domains in various ways, the image quality remains highly competitive with contemporary style transfer models.
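A minimal sketch of the noise-injection idea, assuming the unimodal model exposes a separate style encoder and a decoder that takes a content image and a style code; the interface names model.style_encoder and model.decode and the noise scale sigma are hypothetical placeholders, not the paper's actual API.

```python
import torch

def multimodal_stylize(model, content, style, n_samples=4, sigma=0.1):
    """Draw several stylizations from one (content, style) pair by perturbing
    the style code with Gaussian noise; no parameters are trained."""
    with torch.no_grad():
        style_code = model.style_encoder(style)           # assumed separate style branch
        outputs = []
        for _ in range(n_samples):
            noisy = style_code + sigma * torch.randn_like(style_code)
            outputs.append(model.decode(content, noisy))  # assumed decoder interface
    return outputs
```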
Since Gatys et al. showed that convolutional neural networks (CNNs) can generate new images with artistic styles by separating and recombining the styles and contents of images, Neural Style Transfer has attracted wide attention from computer vision researchers. This paper provides an overview of the development of deep learning networks for style transfer, introduces the classical style transfer models, collects and organizes research on deep-learning-based style transfer, proposes solutions to problems identified during this survey, and finally displays and compares the style transfer results of several classical models.
Transferring the style of an image is a fundamental problem in computer vision: features are extracted from a content image and a style image and then combined to produce a new image that carries characteristics of both inputs. In this paper, we introduce an artificial system that separates and recombines the content and style of arbitrary images, providing a neural algorithm for the creation of artistic images. We use a pre-trained deep convolutional neural network, VGG19, to extract feature maps of the input style image and content image. We then define a loss function that captures the difference between the output image and the two input images, and use gradient descent to update the output image so as to minimize this loss. Experimental results show the feasibility of the method.
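The optimization loop described here follows the classical Gatys-style formulation, which can be sketched as below using torchvision's VGG19; the specific layer indices, loss weights, and optimizer settings are assumptions for illustration rather than the exact values used in the paper, and image preprocessing (ImageNet normalization) is omitted.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Commonly used layer indices in torchvision's vgg19().features (assumed, not from the paper):
CONTENT_LAYERS = {22}                 # relu4_2
STYLE_LAYERS = {1, 6, 11, 20, 29}     # relu1_1 .. relu5_1

vgg = models.vgg19(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def extract(x):
    """Run x through VGG19 and collect content features and style Gram matrices."""
    content, style = {}, {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in CONTENT_LAYERS:
            content[i] = x
        if i in STYLE_LAYERS:
            n, c, h, w = x.shape
            f = x.view(n, c, h * w)
            style[i] = f @ f.transpose(1, 2) / (c * h * w)   # Gram matrix
    return content, style

def transfer(content_img, style_img, steps=300, style_weight=1e6):
    target_c, _ = extract(content_img)
    _, target_s = extract(style_img)
    output = content_img.clone().requires_grad_(True)        # optimize the output image itself
    opt = torch.optim.Adam([output], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        c, s = extract(output)
        loss = sum(F.mse_loss(c[i], target_c[i]) for i in CONTENT_LAYERS)
        loss = loss + style_weight * sum(F.mse_loss(s[i], target_s[i]) for i in STYLE_LAYERS)
        loss.backward()
        opt.step()
    return output.detach()
```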
``Style transfer'' among images has recently emerged as a very active research topic, fuelled by the power of convolutional neural networks (CNNs), and has fast become a very popular technology in social media. This paper investigates the analogous problem in the audio domain: how to transfer the style of a reference audio signal to a target audio content? We propose a flexible framework for the task, which uses a sound texture model to extract statistics characterizing the reference audio style, followed by an optimization-based audio texture synthesis that modifies the target content. In contrast to mainstream optimization-based visual style transfer methods, the proposed process is initialized with the target content instead of random noise, and the optimized loss concerns only texture, not structure. These differences proved key for audio style transfer in our experiments. To extract features of interest, we investigate different architectures, either pre-trained on other tasks, as is done in image style transfer, or engineered based on the human auditory system. Experimental results on different types of audio signals confirm the potential of the proposed approach.
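A simplified stand-in for the described pipeline, assuming texture statistics are taken as frequency correlations of the log-magnitude STFT (the paper uses a richer sound texture model and pre-trained or auditory-inspired features). The two traits highlighted in the abstract are kept: optimization starts from the target content, and the loss is texture-only.

```python
import torch
import torch.nn.functional as F

def texture_stats(audio, n_fft=1024, hop=256):
    """Assumed texture statistics: Gram matrix of log-magnitude STFT frames."""
    spec = torch.stft(audio, n_fft=n_fft, hop_length=hop,
                      window=torch.hann_window(n_fft), return_complex=True)
    mag = spec.abs().clamp_min(1e-6).log()      # (freq, time)
    return mag @ mag.t() / mag.shape[1]         # correlations across frequency bins

def audio_style_transfer(content, style, steps=500, lr=1e-3):
    target = texture_stats(style).detach()
    out = content.clone().requires_grad_(True)  # initialized with the target content, not noise
    opt = torch.optim.Adam([out], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(texture_stats(out), target)  # texture-only loss, no structure term
        loss.backward()
        opt.step()
    return out.detach()
```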
In this paper, we propose to impose a multiscale contextual loss for image style transfer based on convolutional neural networks (CNNs). In the traditional optimization framework, a stylized image is synthesized by constraining its high-level CNN features to be similar to those of a content image and its lower-level CNN features to be similar to those of a style image; this, however, tends to lose many details of the content image, producing unpleasant and inconsistent distortions or artifacts. The proposed multiscale contextual loss, named the Haar loss, preserves these lost details by matching features derived from the content image and the synthesized image via the wavelet transform. It endows the synthesized image with the ability to better retain the semantic information of the content image. More specifically, the unpleasant distortions can be effectively alleviated while the style is well preserved. In our experiments, we show that incorporating the multiscale contextual loss produces visually more consistent and simultaneously well-stylized images.
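A minimal sketch of a Haar-style multiscale matching term, assuming a 2x2 block Haar decomposition and an L1 match of the detail subbands between the synthesized and content features; the number of levels, the choice of norm, and which features are matched are assumptions, not the paper's exact definition of the Haar loss.

```python
import torch
import torch.nn.functional as F

def haar_decompose(x):
    """One level of a 2D Haar transform via 2x2 block averages and differences."""
    a = x[..., 0::2, 0::2]
    b = x[..., 0::2, 1::2]
    c = x[..., 1::2, 0::2]
    d = x[..., 1::2, 1::2]
    ll = (a + b + c + d) / 4          # low-pass approximation
    lh = (a + b - c - d) / 4          # horizontal detail
    hl = (a - b + c - d) / 4          # vertical detail
    hh = (a - b - c + d) / 4          # diagonal detail
    return ll, (lh, hl, hh)

def haar_loss(synth_feat, content_feat, levels=3):
    """Match wavelet detail subbands of synthesized and content features at several scales."""
    loss = 0.0
    s, c = synth_feat, content_feat
    for _ in range(levels):
        s, s_details = haar_decompose(s)
        c, c_details = haar_decompose(c)
        loss = loss + sum(F.l1_loss(sd, cd) for sd, cd in zip(s_details, c_details))
    return loss
```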