Bibliography
Since neural networks can be used to extract information from an image, Gatys et al. found that they could separate the content and style of images and recombine them into a new image, a process called style transfer. Moreover, many feed-forward neural networks have been proposed to speed up the original method and make style transfer practical. However, this comes at a price: these feed-forward networks are inflexible because of their fixed parameters, which means they cannot transfer arbitrary styles in real time but only a single one. Several approaches have been offered to relieve this dilemma, such as a style-swap layer and an adaptive instance normalization (AdaIN) layer. It is worth mentioning that the AdaIN layer only aligns the means and variances of the content feature maps with those of the style feature maps. Our method aims to provide a practical approach that enables arbitrary style transfer in real time, preserving more statistical information through histogram matching, delivering more reliable texture clarity, and offering more intuitive user control. We achieve better results than existing approaches without adding computational complexity, at a speed comparable to the fastest style transfer methods. Our method provides more flexible user control and trustworthy quality and stability.
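A minimal sketch of the two statistics-matching operations mentioned above, assuming PyTorch and (N, C, H, W) feature tensors: `adain` aligns channel-wise mean and standard deviation, and `histogram_match` performs a simple sorting-based channel-wise histogram match. The function names and tensor shapes are our own assumptions, not code from the cited works.

```python
import torch

def adain(content_feat, style_feat, eps=1e-5):
    """Align channel-wise mean/std of content features to those of style features.
    Both tensors have shape (N, C, H, W)."""
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    return (content_feat - c_mean) / c_std * s_std + s_mean

def histogram_match(content_feat, style_feat):
    """Channel-wise histogram matching via sorting: each content activation takes
    the style value of the same rank (spatial sizes must agree)."""
    n, c, h, w = content_feat.shape
    cf = content_feat.view(n, c, -1)
    sf = style_feat.view(n, c, -1)
    ranks = cf.argsort(dim=2).argsort(dim=2)   # rank of each activation in its channel
    sorted_style, _ = sf.sort(dim=2)
    matched = torch.gather(sorted_style, 2, ranks)
    return matched.view(n, c, h, w)

# toy usage with random "feature maps"
content = torch.randn(1, 512, 32, 32)
style = torch.randn(1, 512, 32, 32) * 2.0 + 1.0
print(adain(content, style).shape, histogram_match(content, style).shape)
```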
Coherent rendering in augmented reality deals with synthesizing virtual content that seamlessly blends in with the real content. Unfortunately, capturing or modeling every real aspect in the virtual rendering process is often unfeasible or too expensive. We present a post-processing method that improves the look of rendered overlays in a dental virtual try-on application. We combine the original frame and the default rendered frame in an autoencoder neural network in order to obtain a more natural output, inspired by artistic style transfer research. Specifically, we apply the original frame as style on the rendered frame as content, repeating the process with each new pair of frames. Our method requires only a single forward pass, our shallow architecture ensures fast execution, and our internal feedback loop inherently enforces temporal consistency.
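The following is a rough sketch, under our own assumptions about layer sizes, of the kind of shallow, single-forward-pass autoencoder described above: the real frame and the rendered frame are concatenated channel-wise and decoded into the blended output, once per incoming frame pair.

```python
import torch
import torch.nn as nn

class ShallowBlendNet(nn.Module):
    """Hypothetical shallow autoencoder that takes the real (style) frame and the
    rendered (content) frame concatenated channel-wise and outputs a blended frame.
    Layer sizes are assumptions; the paper's architecture is not reproduced here."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, real_frame, rendered_frame):
        x = torch.cat([real_frame, rendered_frame], dim=1)   # (N, 6, H, W)
        return self.decoder(self.encoder(x))

# one forward pass per incoming frame pair
net = ShallowBlendNet().eval()
with torch.no_grad():
    for real, rendered in [(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256))]:
        output = net(real, rendered)
print(output.shape)  # torch.Size([1, 3, 256, 256])
```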
In this paper, the layer choices of the image style transfer method using the VGG-19 neural network are studied. The VGG-19 network is used to extract feature maps, whose implicit meanings serve as a learning basis. If the layers for stylistic learning are not suitably chosen, the style-transferred image may not look good. Experiments show that color information is concentrated in the lower layers, from conv1-1 to conv2-2, and texture information is concentrated in the middle layers, from conv3-1 to conv4-4. The higher layers, from conv5-1 to conv5-4, appear to depict image content well. Based on these observations, methods for color transfer, texture transfer, and style transfer are presented and compared with conventional methods.
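To make the layer naming concrete, the sketch below collects VGG-19 feature maps at the layer groups discussed above using torchvision's pretrained model; the index mapping follows torchvision's `vgg19.features` ordering, while the grouping comments merely restate the observations from the abstract.

```python
import torch
from torchvision import models

# indices of selected conv layers in torchvision's vgg19.features
VGG19_CONV_IDX = {
    "conv1_1": 0, "conv2_2": 7,      # lower layers: mostly color information
    "conv3_1": 10, "conv4_4": 25,    # middle layers: texture information
    "conv5_1": 28, "conv5_4": 34,    # higher layers: image content
}

def extract_features(image, layer_names):
    """Run an image through VGG-19 and collect feature maps at the named conv layers."""
    vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
    wanted = {VGG19_CONV_IDX[n]: n for n in layer_names}
    feats, x = {}, image
    with torch.no_grad():
        for idx, layer in enumerate(vgg):
            x = layer(x)
            if idx in wanted:
                feats[wanted[idx]] = x
    return feats

feats = extract_features(torch.rand(1, 3, 224, 224), ["conv1_1", "conv3_1", "conv5_1"])
for name, f in feats.items():
    print(name, tuple(f.shape))
```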
Style transfer is a research hotspot in computer vision. Although much research has been conducted on it, high-quality style transfer remains a challenge. In this work, we propose ASTCNN, a real-time arbitrary style transfer convolutional neural network. The ASTCNN consists of two independent encoders and a decoder. The encoders extract style and content features from the style and content images respectively, and the decoder generates the style-transferred image. Experimental results show that ASTCNN produces higher-quality output images than state-of-the-art style transfer algorithms while requiring 23.3% fewer floating-point operations.
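A generic sketch of the two-encoder/one-decoder layout described above: content and style are encoded independently, fused by concatenation plus a 1x1 convolution, and decoded into an image. The actual ASTCNN layers and fusion scheme are not specified in the abstract, so everything below is illustrative.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout, stride=1):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride=stride, padding=1),
                         nn.ReLU(inplace=True))

class TwoEncoderStyleNet(nn.Module):
    """Illustrative two-encoder/one-decoder network; all layer sizes are assumptions."""
    def __init__(self):
        super().__init__()
        self.content_enc = nn.Sequential(conv_block(3, 32, 2), conv_block(32, 64, 2))
        self.style_enc = nn.Sequential(conv_block(3, 32, 2), conv_block(32, 64, 2))
        self.fuse = nn.Conv2d(128, 64, 1)   # merge the two feature stacks
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"), conv_block(64, 32),
            nn.Upsample(scale_factor=2, mode="nearest"), nn.Conv2d(32, 3, 3, padding=1))

    def forward(self, content, style):
        fused = self.fuse(torch.cat([self.content_enc(content),
                                     self.style_enc(style)], dim=1))
        return self.decoder(fused)

out = TwoEncoderStyleNet()(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256))
print(out.shape)  # torch.Size([1, 3, 256, 256])
```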
Re-drawing an image in a certain artistic style is considered a complicated task for a computer. By contrast, humans can easily master the method of composing and describing the style shared between different images. In the past, researchers studying deep neural networks found an appropriate representation of artistic style using perceptual loss and style reconstruction loss. In previous work, Gatys et al. proposed an artificial system based on convolutional neural networks that creates artistic images of high perceptual quality. In terms of running speed, however, it was relatively time-consuming and thus could not be applied to video style transfer. Recently, a feed-forward CNN approach has shown the potential for fast style transformation: an end-to-end system that does not require hundreds of iterations at transfer time. We combine the benefits of both approaches, optimize the feed-forward network, and define a temporal loss function that makes it possible to perform style transfer on video in real time. In contrast to previous methods, ours runs in real time at higher resolution while producing visually pleasing and temporally consistent results.
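A common way to realize such a temporal loss is to warp the previous stylized frame to the current one with optical flow and penalize differences in non-occluded regions; the sketch below assumes the flow and occlusion mask are precomputed and is not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def warp(prev_frame, flow):
    """Backward-warp the previous stylized frame using optical flow.
    prev_frame: (N, 3, H, W); flow: (N, 2, H, W) in pixel units."""
    n, _, h, w = prev_frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0) + flow
    grid_x = 2.0 * grid[:, 0] / (w - 1) - 1.0        # normalize to [-1, 1]
    grid_y = 2.0 * grid[:, 1] / (h - 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)      # (N, H, W, 2)
    return F.grid_sample(prev_frame, grid, align_corners=True)

def temporal_loss(curr_stylized, prev_stylized, flow, occlusion_mask):
    """Penalize changes between the current stylized frame and the warped previous one,
    ignoring occluded pixels (mask is 1 where the flow is reliable)."""
    warped = warp(prev_stylized, flow)
    return (occlusion_mask * (curr_stylized - warped) ** 2).mean()

# toy usage with zero flow and a fully reliable mask
loss = temporal_loss(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64),
                     torch.zeros(1, 2, 64, 64), torch.ones(1, 1, 64, 64))
print(loss.item())
```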
In this paper, we present an improved approach to style transfer for videos based on semantic segmentation. We segment foreground objects and background, and then apply different styles to each. A fully convolutional neural network is used to perform semantic segmentation. We increase the reliability of the segmentation and refine it iteratively using the relationship between foreground objects and background. We also use the segmentation to improve optical flow, applying different motion estimation methods to foreground objects and background. This improves the motion boundaries of the optical flow and resolves the incorrect and discontinuous segmentation caused by occlusion and shape deformation.
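As an illustration of per-region stylization, the sketch below obtains a soft person mask from torchvision's pretrained DeepLabV3 (the cited work uses its own fully convolutional network and iterative refinement, which are not reproduced here) and composites two independently stylized frames with it; the stylized inputs are placeholders for any style transfer backend.

```python
import torch
from torchvision import models
from torchvision.models.segmentation import DeepLabV3_ResNet50_Weights

def foreground_mask(image, person_class=15):
    """Return a soft mask for the 'person' class (Pascal VOC index 15).
    image: (N, 3, H, W), ideally normalized as the model expects."""
    weights = DeepLabV3_ResNet50_Weights.DEFAULT
    model = models.segmentation.deeplabv3_resnet50(weights=weights).eval()
    with torch.no_grad():
        logits = model(image)["out"]                       # (N, 21, H, W)
    return logits.softmax(dim=1)[:, person_class:person_class + 1]

def composite(stylized_fg, stylized_bg, mask):
    """Blend two stylized versions of the same frame with the segmentation mask."""
    return mask * stylized_fg + (1.0 - mask) * stylized_bg

# toy usage: the two 'stylized' frames are placeholders
frame = torch.rand(1, 3, 256, 256)
mask = foreground_mask(frame)
out = composite(frame * 0.5, frame, mask)
print(out.shape)  # torch.Size([1, 3, 256, 256])
```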
Mobile expressive rendering has gained increasing popularity among users seeking casual creativity through image stylization, and supports the development of mobile artists as a new user group. In particular, neural style transfer has advanced as a core technology for emulating the characteristics of manifold artistic styles. When it comes to creative expression, however, the technology still faces inherent limitations in providing low-level controls for localized image stylization. This work enhances state-of-the-art neural style transfer techniques with a generalized user interface and interactive tools that facilitate a creative and localized editing process. We first propose a problem characterization representing the trade-offs between visual quality, run-time performance, and user control. We then present MaeSTrO, a mobile app for orchestrating neural style transfer techniques using iterative, multi-style generative and adaptive neural networks that can be locally controlled through on-screen painting metaphors. First user tests indicate different levels of satisfaction with the implemented techniques and interaction design.
Humans have created pioneering works of art since the beginning of time, whereas artificial intelligence has few notable achievements in creating something visually captivating in the field of art. However, some breakthroughs were made in the past few years by learning the differences between the content and style of an image using convolutional neural networks and texture synthesis. Most approaches, however, are limited by processing time, restricted to a fixed style image, or unable to alter the style weight ratio. We therefore address these restrictions and provide a system that allows any style image to be selected, with a user-defined style weight ratio, in the minimum time possible.
We, humans, have the ability to easily imagine scenes that depict sentences such as ``Today is a beautiful sunny day'' or ``There is a Christmas feel in the air''. While it is hard to precisely describe what one person may imagine, the essential high-level themes associated with such sentences largely remain the same. The ability to synthesize novel images that depict the feel of a sentence is very useful in a variety of applications such as education, advertisement, and entertainment. While existing papers tackle this problem given a style image, we aim to provide a far more intuitive and easy-to-use solution that synthesizes novel renditions of an existing image conditioned on a given sentence. We present a method for cross-modal style transfer between an English sentence and an image, producing a new image that captures the essential theme of the sentence. We do this by modifying the mechanism used in image style transfer to incorporate a style component derived from the given sentence. We demonstrate promising results on the YFCC100m dataset.
In this paper, inspired by Gatys's recent work, we propose a novel approach that transforms photos to comics using deep convolutional neural networks (CNNs). While Gatys's method, which uses a pre-trained VGG network, generally works well for transferring artistic styles such as painting from a style image to a content image, for more minimalist styles such as comics it often fails to produce satisfactory results. To address this, we further introduce a dedicated comic-style CNN, which is trained to classify comic images and photos. This new network is effective in capturing various comic styles and thus helps to produce better comic stylization results. Even with a grayscale style image, Gatys's method can still produce colored output, which is not desirable for comics. We develop a modified optimization framework such that a grayscale image is guaranteed to be synthesized. To avoid converging to poor local minima, we further initialize the output image using the grayscale version of the content image. Various examples show that our method synthesizes better comic images than the state-of-the-art method.
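One simple way to guarantee a grayscale result, in the spirit of the modified optimization described above, is to optimize a single-channel canvas, replicate it across the three channels before evaluating the losses, and initialize it with the grayscale content image; the loss function below is a stand-in, not the authors' VGG-based content and style losses.

```python
import torch

def style_content_loss(image):
    """Hypothetical stand-in for VGG-based content/style losses."""
    return ((image - 0.5) ** 2).mean()

content = torch.rand(1, 3, 256, 256)
# initialize from the grayscale version of the content image (ITU-R BT.601 weights)
gray_init = 0.299 * content[:, 0] + 0.587 * content[:, 1] + 0.114 * content[:, 2]
canvas = gray_init.unsqueeze(1).clone().requires_grad_(True)    # (1, 1, H, W)

optimizer = torch.optim.Adam([canvas], lr=0.01)
for step in range(100):
    optimizer.zero_grad()
    rgb = canvas.repeat(1, 3, 1, 1)     # replicate: the output is grayscale by construction
    loss = style_content_loss(rgb)
    loss.backward()
    optimizer.step()
print(canvas.shape, float(loss))
```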
We propose a method for transferring an arbitrary style to only a specific object in an image. Style transfer is the process of combining the content of an image and the style of another image into a new image. Our results show that the proposed method can realize style transfer restricted to a specific object.