Biblio

Filters: Keyword is style transfer
2022-11-02
Zhao, Li, Jiao, Yan, Chen, Jie, Zhao, Ruixia.  2021.  Image Style Transfer Based on Generative Adversarial Network. 2021 International Conference on Computer Network, Electronic and Automation (ICCNEA). :191–195.
Image style transfer transforms the style of an image while retaining its content details as far as possible. To address the low clarity of the stylized images generated by the CycleGAN network, this paper improves CycleGAN by adding an auto-encoder and a variational auto-encoder to the structure: the encoding part of the auto-encoder extracts image content features, while the variational auto-encoder extracts style features. In addition, the generating network first resizes the image and then applies a convolution, replacing the traditional deconvolution operation, and the discriminating network uses a multi-scale discriminator to force the generated samples to be more realistic and closer to the target image, improving the style transfer results.
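The "resize then convolve" substitution mentioned above is a known remedy for the checkerboard artifacts that transposed convolutions tend to produce. A minimal PyTorch sketch of such a generator upsampling block (module name and channel sizes are illustrative, not taken from the paper):

```python
import torch
import torch.nn as nn

class ResizeConvUp(nn.Module):
    """Upsample by interpolation and then convolve, in place of a
    transposed convolution (deconvolution), which is prone to
    checkerboard artifacts in generated images."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        x = nn.functional.interpolate(x, scale_factor=2, mode="nearest")
        return self.conv(x)

x = torch.randn(1, 128, 32, 32)
print(ResizeConvUp(128, 64)(x).shape)  # torch.Size([1, 64, 64, 64])
```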
2022-03-09
Bo, Xihao, Jing, Xiaoyang, Yang, Xiaojian.  2021.  Style Transfer Analysis Based on Generative Adversarial Networks. 2021 IEEE International Conference on Computer Science, Electronic Information Engineering and Intelligent Control Technology (CEI). :27–30.
Style transfer uses a neural network to extract the content of one image and the style of another, combining the two into a final result; it is broadly applied in social communication, animation production, and entertainment. With style transfer, users can share and exchange images, and painters can produce specific art styles more readily, at lower creation cost and in less production time. These varied and valuable applications have drawn wide attention to style transfer in recent years. This paper reviews the style transfer work of the past few years and analyzes three representative approaches in detail, contrasting them with one another: StyleGAN, CycleGAN, and TL-GAN. It further discusses what an ideal style transfer model should achieve and, measured against such a model, lists the potential problems and prospects of the different methods. A few solutions to these drawbacks are given at the end.
Jia, Ning, Gong, Xiaoyi, Zhang, Qiao.  2021.  Improvement of Style Transfer Algorithm based on Neural Network. 2021 International Conference on Computer Engineering and Application (ICCEA). :1–6.
In recent years, style transfer has found increasingly widespread application. Traditional deep-learning-based style transfer networks often suffer from image distortion, loss of detail, partial disappearance of content, and transfer errors. The deep-learning-based style transfer network we propose in this article targets these problems: it fuses image edge information with semantic segmentation to constrain the image structure before and after the transfer, so that the converted image maintains structural consistency and integrity. We verify that this method successfully suppresses conversion distortion in most scenarios and generates good results.
Yuan, Honghui, Yanai, Keiji.  2021.  Multi-Style Transfer Generative Adversarial Network for Text Images. 2021 IEEE 4th International Conference on Multimedia Information Processing and Retrieval (MIPR). :63–69.
In recent years, neural style transfer has shown impressive results in deep learning. For text in particular, recent research has successfully transferred images from the text font domain to the text style domain. However, transferring multiple styles usually requires training many models, and generating text images in multiple styles within a single model remains an unsolved problem. In this paper, we propose a multi-style transfer network for text that can generate text images in multiple styles with a single model and control the style of the text in a simple way. The main idea is to add conditions to the transfer network so that all styles can be trained effectively within it, and to control the generation of each text style through those conditions. We also optimize the network so that the conditional information propagates effectively through it. The advantages of the proposed network are that multiple text styles can be generated with only one model and that the generation of each style can be controlled. We have tested the network on a large number of texts and demonstrated that it works well when generating multiple text styles at the same time.
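Conditioning a single transfer network on a style label, as described above, is commonly realized with conditional instance normalization, where every style owns a learned scale/shift pair. A PyTorch sketch of that standard mechanism (one plausible reading of the abstract's "conditions", not necessarily the paper's exact scheme):

```python
import torch
import torch.nn as nn

class ConditionalInstanceNorm(nn.Module):
    """One learned (scale, shift) pair per style; the style index picks
    which pair modulates the normalized feature maps."""
    def __init__(self, num_styles, num_channels):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_channels, affine=False)
        self.gamma = nn.Embedding(num_styles, num_channels)
        self.beta = nn.Embedding(num_styles, num_channels)
        nn.init.ones_(self.gamma.weight)
        nn.init.zeros_(self.beta.weight)

    def forward(self, x, style_id):
        g = self.gamma(style_id).unsqueeze(-1).unsqueeze(-1)
        b = self.beta(style_id).unsqueeze(-1).unsqueeze(-1)
        return g * self.norm(x) + b

feats = torch.randn(4, 64, 32, 32)
style = torch.tensor([0, 1, 2, 1])  # one style index per sample
out = ConditionalInstanceNorm(num_styles=8, num_channels=64)(feats, style)
```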
Wang, Yueming.  2021.  An Arbitrary Style Transfer Network based on Dual Attention Module. 2021 IEEE 4th Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC). 4:1221–1226.
Arbitrary style transfer means that stylized images can be generated from arbitrary input pairs of content and style images. Recent arbitrary style transfer algorithms distort the content or transfer the style incompletely because the network must balance content structure against style. In this paper, we introduce a dual attention network, based on style attention and channel attention, that can flexibly transfer local styles while paying more attention to the content structure, keeping it intact and reducing unnecessary style transfer. Experimental results show that the network synthesizes high-quality stylized images while maintaining real-time performance.
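Channel attention of the kind named in the abstract is often implemented squeeze-and-excitation style: pool each channel to a scalar, pass the vector through a small bottleneck, and rescale the channels. A PyTorch sketch under that assumption (reduction ratio and names are illustrative):

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Global-average-pool the feature map, compute per-channel weights
    with a bottleneck MLP, and rescale each channel accordingly."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))  # (b, c) channel weights
        return x * w.view(b, c, 1, 1)

out = ChannelAttention(64)(torch.randn(2, 64, 32, 32))
```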
2021-02-01
Wu, L., Chen, X., Meng, L., Meng, X..  2020.  Multitask Adversarial Learning for Chinese Font Style Transfer. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.
Style transfer between Chinese fonts is challenging due to both the complexity of Chinese characters and the significant differences between fonts. Existing algorithms typically learn a mapping between the reference and target fonts for each character, then use this mapping to generate the characters missing from the target font. However, the characters available for training are unlikely to cover all fine-grained parts of the missing characters, leading to overfitting: the generated characters of the target font may suffer from incomplete or even missing radicals and from dirty dots. To address this problem, this paper presents a multi-task adversarial learning approach, termed MTfontGAN, to generate more vivid Chinese characters. MTfontGAN learns to transfer a reference font to multiple target fonts simultaneously. An alignment is imposed on the encoders of the different tasks to make them focus on the parts of the characters that matter for general style transfer. Such cross-task interaction at the feature level effectively improves the generalization capability of MTfontGAN. Its performance is evaluated on three Chinese font datasets. Experimental results show that MTfontGAN outperforms state-of-the-art algorithms in the single-task setting and, more importantly, that increasing the number of tasks improves performance on all of them.
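The abstract does not spell out the form of the cross-task encoder alignment; one speculative realization is a penalty pulling the per-task encoder features for the same input toward their mean, sketched here in PyTorch:

```python
import torch

def encoder_alignment_loss(task_features):
    """Penalize disagreement between the encoder feature maps produced
    by different tasks for the same input characters (a hypothetical
    form of the alignment term; the paper's exact loss may differ)."""
    stacked = torch.stack(task_features)        # (tasks, b, c, h, w)
    mean = stacked.mean(dim=0, keepdim=True)
    return ((stacked - mean) ** 2).mean()

f1 = torch.randn(2, 64, 16, 16)  # task-1 encoder features
f2 = torch.randn(2, 64, 16, 16)  # task-2 encoder features
loss = encoder_alignment_loss([f1, f2])
```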
Bai, Y., Guo, Y., Wei, J., Lu, L., Wang, R., Wang, Y..  2020.  Fake Generated Painting Detection Via Frequency Analysis. 2020 IEEE International Conference on Image Processing (ICIP). :1256–1260.
With the development of deep neural networks, digital fake paintings can be generated by various style transfer algorithms. To detect them, we analyze fake generated and real paintings in the Fourier frequency domain and observe statistical differences and artifacts. Based on these observations, we propose Fake Generated Painting Detection via Frequency Analysis (FGPD-FA), which extracts three types of features in the frequency domain. We also propose a digital fake painting detection database for assessing the proposed method. Experimental results demonstrate its excellent performance under different testing conditions.
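The abstract does not enumerate the three feature types, but a widely used frequency-domain descriptor for exposing generation artifacts is the azimuthally averaged log power spectrum, sketched below in NumPy as a stand-in:

```python
import numpy as np

def radial_power_spectrum(gray_image, n_bins=64):
    """Azimuthally averaged log power spectrum of an image: a compact
    frequency descriptor often used to reveal the spectral artifacts
    of generated images (illustrative, not the paper's exact features)."""
    f = np.fft.fftshift(np.fft.fft2(gray_image))
    power = np.log1p(np.abs(f) ** 2)
    h, w = gray_image.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)              # radius of each pixel
    bins = np.linspace(0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    sums = np.bincount(idx, weights=power.ravel(), minlength=n_bins)
    counts = np.maximum(np.bincount(idx, minlength=n_bins), 1)
    return sums / counts                            # n_bins-dim feature

feature = radial_power_spectrum(np.random.rand(256, 256))
```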
2020-12-11
Peng, M., Wu, Q..  2019.  Enhanced Style Transfer in Real-Time with Histogram-Matched Instance Normalization. 2019 IEEE 21st International Conference on High Performance Computing and Communications; IEEE 17th International Conference on Smart City; IEEE 5th International Conference on Data Science and Systems (HPCC/SmartCity/DSS). :2001–2006.

Since neural networks can extract information from an image, Gatys et al. found that the content and style of images could be separated and recombined into a new image, a process called style transfer. Many feed-forward neural networks have since been proposed to speed up the original method and make style transfer a practical application. This comes at a price: the fixed parameters of a feed-forward network make it unchangeable, so it can transfer only a single style in real time rather than arbitrary styles. Several approaches have been offered to relieve this dilemma, such as a style-swap layer and an adaptive instance normalization layer (AdaIN). It is worth noting that the AdaIN layer aligns only the means and variances of the content feature maps with those of the style feature maps. Our method aims to provide a practical approach that enables arbitrary style transfer in real time, preserving more statistical information by histogram matching and providing more reliable texture clarity and finer user control. We achieve better results than existing approaches without added computation or complexity, at a speed comparable to the fastest style transfer methods, with more flexible user control and trustworthy quality and stability.
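AdaIN's mean/variance alignment, and the fuller histogram matching advocated above, can both be written compactly. A PyTorch sketch, assuming for simplicity that the content and style feature maps have equal spatial size in the sorting-based matcher:

```python
import torch

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization: align the per-channel mean and
    std of the content features with those of the style features."""
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content - c_mean) / c_std + s_mean

def histogram_match(content, style):
    """Channel-wise histogram matching by sorting: remap each content
    channel onto the sorted values of the corresponding style channel,
    preserving the full distribution rather than just mean and std."""
    b, c, h, w = content.shape
    c_flat = content.view(b, c, -1)
    s_sorted, _ = style.view(b, c, -1).sort(dim=-1)
    rank = c_flat.argsort(dim=-1).argsort(dim=-1)   # rank of each value
    return torch.gather(s_sorted, -1, rank).view(b, c, h, w)
```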

Vasiliu, V., Sörös, G..  2019.  Coherent Rendering of Virtual Smile Previews with Fast Neural Style Transfer. 2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). :66–73.

Coherent rendering in augmented reality deals with synthesizing virtual content that seamlessly blends in with the real content. Unfortunately, capturing or modeling every real aspect in the virtual rendering process is often unfeasible or too expensive. We present a post-processing method that improves the look of rendered overlays in a dental virtual try-on application. We combine the original frame and the default rendered frame in an autoencoder neural network in order to obtain a more natural output, inspired by artistic style transfer research. Specifically, we apply the original frame as style on the rendered frame as content, repeating the process with each new pair of frames. Our method requires only a single forward pass, our shallow architecture ensures fast execution, and our internal feedback loop inherently enforces temporal consistency.

Lee, P., Tseng, C..  2019.  On the Layer Choice of the Image Style Transfer Using Convolutional Neural Networks. 2019 IEEE International Conference on Consumer Electronics - Taiwan (ICCE-TW). :1–2.

In this paper, the layer choices of the image style transfer method using the VGG-19 neural network are studied. The VGG-19 network is used to extract feature maps whose implicit meaning serves as the learning basis. If the layers for stylistic learning are not suitably chosen, the style-transferred image may not look good. Experiments show that color information is concentrated in the lower layers, from conv1-1 to conv2-2, and texture information in the middle layers, from conv3-1 to conv4-4, while the higher layers, from conv5-1 to conv5-4, appear to depict image content well. Based on these observations, methods for color transfer, texture transfer, and style transfer are presented and compared with conventional methods.
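The conv1-1 through conv5-4 layers discussed above correspond to fixed indices inside torchvision's VGG-19 feature extractor. A sketch of collecting feature maps at chosen layers (a recent torchvision is assumed; the index table lists only a few of the layers named in the abstract):

```python
import torch
import torchvision.models as models

# Conv-layer indices inside torchvision's vgg19().features.
LAYERS = {"conv1_1": 0, "conv2_2": 7, "conv3_1": 10,
          "conv4_4": 25, "conv5_1": 28, "conv5_4": 34}

vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()

def extract_features(image, layer_names):
    """Run the image through VGG-19 once, collecting the feature maps
    at the requested layers."""
    wanted = {LAYERS[n]: n for n in layer_names}
    feats, x = {}, image
    for idx, module in enumerate(vgg):
        x = module(x)
        if idx in wanted:
            feats[wanted[idx]] = x
        if idx >= max(wanted):
            break
    return feats

img = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image
feats = extract_features(img, ["conv1_1", "conv3_1", "conv5_1"])
```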

Huang, Y., Jing, M., Tang, H., Fan, Y., Xue, X., Zeng, X..  2019.  Real-Time Arbitrary Style Transfer with Convolution Neural Network. 2019 IEEE International Conference on Integrated Circuits, Technologies and Applications (ICTA). :65–66.

Style transfer is a research hotspot in computer vision, and despite much research, high-quality style transfer remains a challenge. In this work, we propose an algorithm named ASTCNN, a real-time arbitrary style transfer convolution neural network. ASTCNN consists of two independent encoders and a decoder: the encoders extract style and content features from the style and content images respectively, and the decoder generates the style-transferred image. Experimental results show that ASTCNN achieves higher-quality output images than state-of-the-art style transfer algorithms while requiring 23.3% less floating-point computation.

Cao, Y., Tang, Y..  2019.  Development of Real-Time Style Transfer for Video System. 2019 3rd International Conference on Circuits, System and Simulation (ICCSS). :183–187.

Re-drawing an image in a certain artistic style is a complicated task for a computer, whereas a human can easily grasp how to compose and transfer the style between different images. Researchers studying deep neural networks have found an appropriate representation of artistic style using perceptual loss and style reconstruction loss. In earlier work, Gatys et al. proposed an artificial system based on convolutional neural networks that creates artistic images of high perceptual quality; however, it was relatively time-consuming to run and therefore unsuitable for video style transfer. More recently, a feed-forward CNN approach has shown the potential for fast style transformation: an end-to-end system that avoids hundreds of iterations per transfer. We combine the benefits of both approaches, optimizing the feed-forward network and defining a temporal loss function to make real-time style transfer on video possible. In contrast to previous methods, ours runs in real time at higher resolution while producing competitive, visually pleasing, and temporally consistent results.
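A temporal loss of the sort described usually penalizes the difference between the current stylized frame and the previous one warped by optical flow, with occluded pixels masked out. A hedged PyTorch sketch (the paper's exact formulation may differ):

```python
import torch
import torch.nn.functional as F

def warp(frame, flow):
    """Backward-warp a frame with a dense flow field (flow[:, 0] holds
    x-displacements, flow[:, 1] y-displacements) via grid_sample."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).float().to(frame.device)  # (h, w, 2)
    grid = base.unsqueeze(0) + flow.permute(0, 2, 3, 1)
    gx = 2 * grid[..., 0] / (w - 1) - 1   # normalize to [-1, 1] for grid_sample
    gy = 2 * grid[..., 1] / (h - 1) - 1
    return F.grid_sample(frame, torch.stack((gx, gy), dim=-1),
                         align_corners=True)

def temporal_loss(stylized_t, stylized_prev, flow, occlusion_mask):
    """Penalize change between the current stylized frame and the
    flow-warped previous one; occluded pixels (mask == 0) are ignored."""
    warped = warp(stylized_prev, flow)
    return (occlusion_mask * (stylized_t - warped) ** 2).mean()
```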

2020-12-07
Chang, R., Chang, C., Way, D., Shih, Z..  2018.  An improved style transfer approach for videos. 2018 International Workshop on Advanced Image Technology (IWAIT). :1–2.

In this paper, we present an improved approach to style transfer for videos based on semantic segmentation. We segment the foreground objects and the background and apply different styles to each. A fully convolutional neural network performs the semantic segmentation, and we iteratively improve its reliability using the segmentation itself together with the relationship between foreground objects and background. We also use the segmentation to improve optical flow, applying different motion estimation methods to foreground objects and background. This sharpens the motion boundaries of the optical flow and solves the problems of incorrect and discontinuous segmentation caused by occlusion and shape deformation.

Reimann, M., Klingbeil, M., Pasewaldt, S., Semmo, A., Trapp, M., Döllner, J..  2018.  MaeSTrO: A Mobile App for Style Transfer Orchestration Using Neural Networks. 2018 International Conference on Cyberworlds (CW). :9–16.

Mobile expressive rendering has gained increasing popularity among users seeking casual creativity through image stylization, and supports the development of mobile artists as a new user group. In particular, neural style transfer has advanced as a core technology for emulating the characteristics of manifold artistic styles. When it comes to creative expression, however, the technology still faces inherent limitations in providing low-level control over localized image stylization. This work enhances state-of-the-art neural style transfer techniques with a generalized user interface whose interactive tools facilitate a creative and localized editing process. We first propose a problem characterization that represents the trade-offs between visual quality, run-time performance, and user control. We then present MaeSTrO, a mobile app for orchestrating neural style transfer techniques using iterative, multi-style generative and adaptive neural networks that can be locally controlled by on-screen painting metaphors. Finally, first user tests indicate different levels of satisfaction with the implemented techniques and the interaction design.

Jeong, T., Mandal, A..  2018.  Flexible Selecting of Style to Content Ratio in Neural Style Transfer. 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA). :264–269.

Humans have created pioneering works of art since the beginning of time, while artificial intelligence has few notable achievements in creating something visually captivating in the field of art. In the past few years, however, some breakthroughs were made by learning the differences between the content and style of an image using convolutional neural networks and texture synthesis. Most of these approaches are limited in processing time, in the choice of style image, or in the ability to alter the weight ratio of the style image. We address these restrictions and provide a system that allows any style image to be selected, with a user-defined style weight ratio, in the minimum time possible.
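A user-defined style-to-content weight ratio plugs directly into the classic Gatys-style objective: the total loss is a weighted sum of a content term and Gram-matrix style terms. A PyTorch sketch (weights and helper names are illustrative):

```python
import torch

def gram(feat):
    """Gram matrix of a feature map: channel-wise correlations that
    summarize texture, i.e. style."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def total_loss(content_feat, style_feats, out_content_feat, out_style_feats,
               style_weight=1e3, content_weight=1.0):
    """Weighted sum of content and style losses; the user-chosen
    style_weight / content_weight ratio sets how strongly the style
    image dominates the output."""
    c_loss = torch.mean((out_content_feat - content_feat) ** 2)
    s_loss = sum(torch.mean((gram(o) - gram(s)) ** 2)
                 for o, s in zip(out_style_feats, style_feats))
    return content_weight * c_loss + style_weight * s_loss
```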

2018-11-19
Chelaramani, S., Jha, A., Namboodiri, A. M..  2018.  Cross-Modal Style Transfer. 2018 25th IEEE International Conference on Image Processing (ICIP). :2157–2161.

We, humans, have the ability to easily imagine scenes that depict sentences such as "Today is a beautiful sunny day" or "There is a Christmas feel in the air". While it is hard to precisely describe what one person may imagine, the essential high-level themes associated with such sentences largely remain the same. The ability to synthesize novel images that depict the feel of a sentence is very useful in a variety of applications such as education, advertisement, and entertainment. While existing papers tackle this problem given a style image, we aim to provide a far more intuitive and easy-to-use solution that synthesizes novel renditions of an existing image, conditioned on a given sentence. We present a method for cross-modal style transfer between an English sentence and an image that produces a new image imbibing the essential theme of the sentence. We do this by modifying the mechanism used in image style transfer to incorporate a style component derived from the given sentence. We demonstrate promising results using the YFCC100m dataset.

Chen, Y., Lai, Y., Liu, Y..  2017.  Transforming Photos to Comics Using Convolutional Neural Networks. 2017 IEEE International Conference on Image Processing (ICIP). :2010–2014.

In this paper, inspired by Gatys's recent work, we propose a novel approach that transforms photos to comics using deep convolutional neural networks (CNNs). While Gatys's method, which uses a pre-trained VGG network, generally works well for transferring artistic styles such as paintings from a style image to a content image, for more minimalist styles such as comics it often fails to produce satisfactory results. To address this, we introduce a dedicated comic style CNN, trained to classify comic images versus photos. This new network is effective in capturing various comic styles and thus helps to produce better comic stylization results. Even with a grayscale style image, Gatys's method can still produce colored output, which is not desirable for comics; we develop a modified optimization framework that guarantees a grayscale image is synthesized. To avoid converging to poor local minima, we further initialize the output image with the grayscale version of the content image. Various examples show that our method synthesizes better comic images than the state-of-the-art method.

Shinya, A., Tung, N. D., Harada, T., Thawonmas, R..  2017.  Object-Specific Style Transfer Based on Feature Map Selection Using CNNs. 2017 Nicograph International (NicoInt). :88–88.

We propose a method for transferring an arbitrary style to only a specific object in an image. Style transfer is the process of combining the content of one image with the style of another into a new image. Our results show that the proposed method can realize style transfer restricted to a specific object.