Biblio

Filters: Keyword is arbitrary style transfer
2022-03-09
Shi, Di-Bo, Xie, Huan, Ji, Yi, Li, Ying, Liu, Chun-Ping.  2021.  Deep Content Guidance Network for Arbitrary Style Transfer. 2021 International Joint Conference on Neural Networks (IJCNN). :1–8.
Arbitrary style transfer refers to generating a new image from any pair of existing images: the generated image retains the content structure of one and the style pattern of the other. Recent arbitrary style transfer algorithms typically perform well at either content retention or style transfer, but struggle to strike a trade-off between the two. In this paper, we propose the Deep Content Guidance Network (DCGN), built by stacking content guidance (CG) layers. Each CG layer comprises a position self-attention (pSA) module, a channel self-attention (cSA) module and a content guidance attention (cGA) module. Specifically, the pSA module extracts more effective content information from the spatial layout of content images, while the cSA module enriches the style representation of style images along the channel dimension. Taking a non-local view, the cGA module uses content information to guide the distribution of style features, yielding a more detailed style expression. Moreover, we introduce a new permutation loss to generalize feature expression, obtaining abundant feature expressions while maintaining the content structure. Qualitative and quantitative experiments verify that our approach produces better stylized images than state-of-the-art methods.
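The channel self-attention idea mentioned in the abstract above can be illustrated with a generic sketch. Note the function name, the Gram-like affinity matrix, and the residual connection are assumptions drawn from common channel-attention designs, not the paper's exact cSA module:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_self_attention(feat):
    """Hypothetical channel self-attention sketch for a (C, H, W) feature map.

    Channel affinities are computed from the Gram-like matrix F F^T, then
    channels are reweighted by those affinities and added back residually.
    The paper's cSA module may differ in its exact formulation.
    """
    C, H, W = feat.shape
    F = feat.reshape(C, H * W)
    attn = softmax(F @ F.T, axis=-1)    # (C, C) channel-to-channel affinities
    out = attn @ F                      # mix channels according to affinity
    return feat + out.reshape(C, H, W)  # residual connection
```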
Wang, Yueming.  2021.  An Arbitrary Style Transfer Network based on Dual Attention Module. 2021 IEEE 4th Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC). 4:1221–1226.
Arbitrary style transfer means that stylized images can be generated from arbitrary input pairs of content and style images. Recent arbitrary style transfer algorithms distort content or transfer style incompletely because the network must balance content structure against style. In this paper, we introduce a dual attention network based on style attention and channel attention, which can flexibly transfer local styles, pays more attention to content structure and keeps it intact, and reduces unnecessary style transfer. Experimental results show that the network synthesizes high-quality stylized images while maintaining real-time performance.
2020-12-11
Nguyen, A., Choi, S., Kim, W., Lee, S..  2019.  A Simple Way of Multimodal and Arbitrary Style Transfer. ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :1752–1756.

We re-define multimodality and introduce a simple approach to multimodal and arbitrary style transfer. Conventional style transfer methods are limited to synthesizing a deterministic output from a single style, and no prior work can generate multiple images with varying details, i.e. multimodality, given a single style. In this work, we achieve multimodal and arbitrary style transfer by injecting noise into a unimodal method. This novel approach requires no trainable parameters and can be readily applied to any unimodal style transfer method in the literature that has a separate style-encoding sub-network. Experimental results show that, while transferring an image to multiple domains in various ways, our method attains image quality highly competitive with contemporary style transfer models.
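The noise-injection idea can be sketched as follows. The helper name, the choice of per-channel statistics as the style code, and where the noise is applied are all illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def noisy_style_code(style_feat, noise_scale=0.1, rng=None):
    """Hypothetical sketch: perturb a channel-wise style code with Gaussian
    noise so that repeated calls with the same style feature map (C, H, W)
    yield different stylizations, i.e. multimodal outputs.
    """
    rng = rng if rng is not None else np.random.default_rng()
    # Channel-wise mean/std serve as the style code in this sketch.
    mean = style_feat.mean(axis=(1, 2), keepdims=True)
    std = style_feat.std(axis=(1, 2), keepdims=True)
    # Inject noise into the code; no trainable parameters are involved.
    mean = mean + noise_scale * rng.standard_normal(mean.shape)
    std = std * (1.0 + noise_scale * rng.standard_normal(std.shape))
    return mean, std
```

Because the perturbation touches only the style code, any downstream decoder that consumes such a code could, in principle, be reused unchanged.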

2018-11-19
Huang, X., Belongie, S..  2017.  Arbitrary Style Transfer in Real-Time with Adaptive Instance Normalization. 2017 IEEE International Conference on Computer Vision (ICCV). :1510–1519.

Gatys et al. recently introduced a neural algorithm that renders a content image in the style of another image, achieving so-called style transfer. However, their framework requires a slow iterative optimization process, which limits its practical application. Fast approximations with feed-forward neural networks have been proposed to speed up neural style transfer. Unfortunately, the speed improvement comes at a cost: the network is usually tied to a fixed set of styles and cannot adapt to arbitrary new styles. In this paper, we present a simple yet effective approach that for the first time enables arbitrary style transfer in real-time. At the heart of our method is a novel adaptive instance normalization (AdaIN) layer that aligns the mean and variance of the content features with those of the style features. Our method achieves speed comparable to the fastest existing approach, without the restriction to a pre-defined set of styles. In addition, our approach allows flexible user controls such as content-style trade-off, style interpolation, color & spatial controls, all using a single feed-forward neural network.
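The AdaIN operation described above has a well-known closed form: normalize each channel of the content features, then rescale and shift using the style features' per-channel statistics. A minimal NumPy sketch (`adain` is a hypothetical helper name; in the paper this layer sits inside an encoder-decoder network):

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization on (C, H, W) feature maps:
    align the per-channel mean and standard deviation of the content
    features with those of the style features.
    """
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    normalized = (content - c_mean) / (c_std + eps)  # zero mean, unit std
    return s_std * normalized + s_mean               # adopt style statistics
```

Because the layer is just a statistics swap with no trainable parameters, any new style can be applied in a single feed-forward pass.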