Biblio

Filters: Keyword is neural style transfer
2022-03-09
Kline, Timothy L.  2021.  Improving Domain Generalization in Segmentation Models with Neural Style Transfer. 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI). :1324–1328.
Generalizing automated medical image segmentation methods to new image domains is inherently difficult. We have previously developed a number of automated segmentation methods that perform at the level of human readers on images acquired under similar conditions to the original training data. We are interested in exploring techniques that will improve model generalization to new imaging domains. In this study we explore a method to limit the inherent bias of these models to intensity and textural information. Using a dataset of 100 T2-weighted MR images with fat-saturation, and 100 T2-weighted MR images without fat-saturation, we explore the use of neural style transfer to induce shape preference and improve model performance on the task of segmenting the kidneys in patients affected by polycystic kidney disease. We find that using neural style transfer images improves the average dice value by 0.2. In addition, visualizing individual network kernel responses highlights a drastic difference in the optimized networks. Biasing models to invoke shape preference is a promising approach to create methods that are more closely aligned with human perception.
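For reference, the Dice value reported above measures overlap between a predicted and a ground-truth segmentation mask; a minimal sketch in PyTorch, assuming binary masks (our illustration, not code from the paper):

```python
import torch

def dice_coefficient(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    intersection = (pred * target).sum()
    return (2 * intersection + eps) / (pred.sum() + target.sum() + eps)
```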
Bo, Xihao, Jing, Xiaoyang, Yang, Xiaojian.  2021.  Style Transfer Analysis Based on Generative Adversarial Networks. 2021 IEEE International Conference on Computer Science, Electronic Information Engineering and Intelligent Control Technology (CEI). :27–30.
Style transfer means using a neural network to extract the content of one image and the style of another image. The two are combined to produce the final result, and the technique is broadly applied in social communication, animation production, and entertainment. Using style transfer, users can share and exchange images, and painters can create specific art styles more readily with less creation cost and production time. Style transfer has therefore attracted wide attention recently due to its various and valuable applications. This paper reviews style transfer work from the past few years and chooses three representative works, StyleGAN, CycleGAN, and TL-GAN, to analyze in detail and contrast with each other. Moreover, it discusses what an ideal style transfer model should achieve. Measured against such a model, the potential problems and prospects of different style transfer methods are listed, and a couple of solutions to these drawbacks are given in the end.
Shi, Di-Bo, Xie, Huan, Ji, Yi, Li, Ying, Liu, Chun-Ping.  2021.  Deep Content Guidance Network for Arbitrary Style Transfer. 2021 International Joint Conference on Neural Networks (IJCNN). :1–8.
Arbitrary style transfer refers to generating a new image from any pair of existing images, where the generated image retains the content structure of one and the style pattern of the other. In terms of content retention and style transfer, recent arbitrary style transfer algorithms normally perform well at one or the other, but find it difficult to strike a trade-off between the two. In this paper, we propose the Deep Content Guidance Network (DCGN), which is a stack of content guidance (CG) layers. Each CG layer comprises a position self-attention (pSA) module, a channel self-attention (cSA) module, and a content guidance attention (cGA) module. Specifically, the pSA module extracts more effective content information from the spatial layout of content images, and the cSA module enriches the style representation of style images in the channel dimension. In a non-local view, the cGA module utilizes content information to guide the distribution of style features, yielding a more detailed style expression. Moreover, we introduce a new permutation loss to generalize feature expression, so as to obtain abundant feature expressions while maintaining the content structure. Qualitative and quantitative experiments verify that our approach produces better stylized images than state-of-the-art methods.
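The abstract does not spell out the modules' internals; a plausible sketch of the position self-attention building block, in the standard non-local form that pSA modules typically follow (the 1×1-convolution layout and channel reduction are our assumptions), looks like:

```python
import torch
import torch.nn.functional as F

class PositionSelfAttention(torch.nn.Module):
    """Spatial (position) self-attention over a (B, C, H, W) feature map."""
    def __init__(self, channels: int):
        super().__init__()
        self.query = torch.nn.Conv2d(channels, channels // 8, 1)
        self.key = torch.nn.Conv2d(channels, channels // 8, 1)
        self.value = torch.nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).view(b, -1, h * w).transpose(1, 2)  # (B, HW, C/8)
        k = self.key(x).view(b, -1, h * w)                    # (B, C/8, HW)
        attn = F.softmax(q @ k, dim=-1)                       # each position attends to all others
        v = self.value(x).view(b, c, h * w)                   # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return out + x                                        # residual connection
```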
Kavitha, S., Dhanapriya, B., Vignesh, G. Naveen, Baskaran, K.R.  2021.  Neural Style Transfer Using VGG19 and Alexnet. 2021 International Conference on Advancements in Electrical, Electronics, Communication, Computing and Automation (ICAECA). :1–6.
Art is the perfect way for people to express emotions that words are unable to convey. By simply looking at art, we can understand a person's creativity and thoughts. In former times, artists spent a great deal of time creating images in varied styles. In the current deep learning era, we are able to create images in different styles of our choosing within a short period of time. Neural style transfer is the most popular and widely used deep learning application of this kind: it applies a desired style to a content image, generating an output image that combines the style of one image with the content of the other. In this paper we implement the neural style transfer model with two architectures, namely VGG19 and AlexNet, and compare the output styled image and the total loss obtained through each. In addition, three different activation functions are used to compare the quality and total loss of output styled images within the AlexNet architecture.
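Both architectures here serve as feature extractors for the classic Gatys-style objective; a minimal sketch of the Gram-matrix style loss at its core, with layer choice and weighting omitted (the feature maps are assumed to come from a pretrained VGG19 or AlexNet):

```python
import torch
import torch.nn.functional as F

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """Channel correlations of a (B, C, H, W) feature map; encodes texture/style."""
    b, c, h, w = features.shape
    flat = features.view(b, c, h * w)
    return flat @ flat.transpose(1, 2) / (c * h * w)  # normalized Gram matrix

def style_loss(generated_feats: list, style_feats: list) -> torch.Tensor:
    """Sum of Gram-matrix mismatches over the chosen feature layers."""
    return sum(F.mse_loss(gram_matrix(g), gram_matrix(s))
               for g, s in zip(generated_feats, style_feats))
```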
Peng, Cheng, Xu, Chenning, Zhu, Yincheng.  2021.  Analysis of Neural Style Transfer Based on Generative Adversarial Network. 2021 IEEE International Conference on Computer Science, Electronic Information Engineering and Intelligent Control Technology (CEI). :189–192.
The goal of neural style transfer is to transform images with deep learning methods, for example changing oil paintings into sketch-style images. The Generative Adversarial Network (GAN) has made remarkable achievements in neural style transfer in recent years. This paper first introduces three typical neural style transfer methods: StyleGAN, StarGAN, and Transparent Latent GAN (TL-GAN). We then discuss the advantages and disadvantages of these models, including the quality of the feature axes, their scale, and the models' interpretability. In addition, as the core of this paper, we put forward innovative improvements to the above models, including how to fully exploit the advantages of all three to derive a better style conversion model.
Jia, Ning, Gong, Xiaoyi, Zhang, Qiao.  2021.  Improvement of Style Transfer Algorithm based on Neural Network. 2021 International Conference on Computer Engineering and Application (ICCEA). :1–6.
In recent years, the application of style transfer has become more and more widespread. Traditional deep-learning-based style transfer networks often suffer from problems such as image distortion, loss of detailed information, partial disappearance of content, and transfer errors. The deep-learning-based style transfer network we propose in this article is aimed at dealing with these problems. Our method uses image edge information fusion and semantic segmentation to constrain the image structure before and after the transfer, so that the converted image maintains structural consistency and integrity. We have verified that this method successfully suppresses image conversion distortion in most scenarios and generates good results.
Park, Byung H., Chattopadhyay, Somrita, Burgin, John.  2021.  Haze Mitigation in High-Resolution Satellite Imagery Using Enhanced Style-Transfer Neural Network and Normalization Across Multiple GPUs. 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS. :2827–2830.
Despite recent advances in deep learning approaches, haze mitigation in large satellite images is still a challenging problem. Due to the amorphous nature of haze, object detection and image segmentation approaches are not applicable, and it is practically infeasible to obtain ground truths for training. The bounded memory capacity of GPUs is another constraint that limits the size of the image to be processed. In this paper, we propose a style-transfer-based neural network approach to mitigate haze in large overhead imagery. The network is trained without paired ground truths; further, a perceptual loss is added to restore vivid colors, enhance contrast, and minimize artifacts. The paper also illustrates our use of multiple GPUs in a collective way to produce a single coherent clear image, where each GPU dehazes a different portion of a large hazy image.
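A perceptual loss of the kind mentioned here compares images in the feature space of a frozen pretrained network rather than in pixel space; a minimal sketch (the VGG16 backbone and the relu2_2 cut-off are illustrative assumptions, as the paper does not name its layers):

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

class PerceptualLoss(torch.nn.Module):
    """MSE between feature maps of a frozen VGG16, truncated at relu2_2."""
    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features[:9]
        for p in vgg.parameters():
            p.requires_grad_(False)   # keep the feature extractor frozen
        self.vgg = vgg.eval()

    def forward(self, output: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
        return F.mse_loss(self.vgg(output), self.vgg(reference))
```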
Yuan, Honghui, Yanai, Keiji.  2021.  Multi-Style Transfer Generative Adversarial Network for Text Images. 2021 IEEE 4th International Conference on Multimedia Information Processing and Retrieval (MIPR). :63–69.
In recent years, neural style transfer has shown impressive results in deep learning. In particular, for text style transfer, recent research has successfully completed the transition from the text font domain to the text style domain. However, transferring multiple styles often requires learning many models, and generating images of texts in multiple styles within a single model remains an unsolved problem. In this paper, we propose a multiple-style transformation network for text style transfer that can generate multiple styles of text images in a single model and control the style of the text in a simple way. The main idea is to add conditions to the transfer network so that all the styles can be trained effectively in one network, and to control the generation of each text style through those conditions. We also optimize the network so that the conditional information is transmitted effectively through it. The advantages of the proposed network are that multiple styles of text can be generated with only one model and that the generation of text styles can be controlled. We have tested the proposed network on a large number of texts and demonstrated that it works well when generating multiple styles of text at the same time.
Gong, Peiyong, Zheng, Kai, Jiang, Yi, Liu, Jia.  2021.  Water Surface Object Detection Based on Neural Style Learning Algorithm. 2021 40th Chinese Control Conference (CCC). :8539–8543.
To detect objects on the water surface, a neural style learning algorithm is proposed in this paper. The algorithm uses the Gram matrix of a pre-trained convolutional neural network to represent the style of the texture in the image, a representation originally used for image style transfer. Objects on the water surface can be easily distinguished by the differences in the style of their image texture. The algorithm is tested on the dataset of the Airbus Ship Detection Challenge on Kaggle. Compared to other water surface object detection algorithms, the proposed algorithm achieves a good precision of 0.925 at a recall of 0.86.
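As a rough illustration of the idea, a Gram-matrix style signature can be computed per image patch, and patches whose signature deviates from the water background's can be flagged as objects; the threshold and feature extractor below are our assumptions, not details from the paper:

```python
import torch

def style_descriptor(feat: torch.Tensor) -> torch.Tensor:
    """Flattened, normalized Gram matrix of a (C, H, W) feature patch."""
    c, h, w = feat.shape
    flat = feat.view(c, h * w)
    return (flat @ flat.T / (c * h * w)).flatten()

def looks_like_object(patch_feat: torch.Tensor,
                      water_feat: torch.Tensor,
                      threshold: float = 0.1) -> bool:
    """Flag a patch whose texture style deviates from the water background."""
    return torch.dist(style_descriptor(patch_feat),
                      style_descriptor(water_feat)).item() > threshold
```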
Wang, Yueming.  2021.  An Arbitrary Style Transfer Network based on Dual Attention Module. 2021 IEEE 4th Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC). 4:1221–1226.
Arbitrary style transfer means that stylized images can be generated from arbitrary input pairs of content and style images. Recent arbitrary style transfer algorithms lead to distortion of content or incomplete style transfer because the network must strike a balance between content structure and style. In this paper, we introduce a dual attention network based on style attention and channel attention, which can flexibly transfer local styles, pay more attention to content structure, keep the content structure intact, and reduce unnecessary style transfer. Experimental results show that the network can synthesize high-quality stylized images while maintaining real-time performance.
2021-02-01
Yeh, M., Tang, S., Bhattad, A., Zou, C., Forsyth, D.  2020.  Improving Style Transfer with Calibrated Metrics. 2020 IEEE Winter Conference on Applications of Computer Vision (WACV). :3149–3157.
Style transfer produces a transferred image which is a rendering of a content image in the manner of a style image. We seek to understand how to improve style transfer. Doing so requires quantitative evaluation procedures, but current evaluation is qualitative, mostly involving user studies. We describe a novel quantitative evaluation procedure. Our procedure relies on two statistics: the Effectiveness (E) statistic measures the extent to which a given style has been transferred to the target, and the Coherence (C) statistic measures the extent to which the original image's content is preserved. Our statistics are calibrated to human preference: targets with larger values of E and C will reliably be preferred by human subjects in comparisons of style and content, respectively. We use these statistics to investigate the relative performance of a number of Neural Style Transfer (NST) methods, revealing a number of intriguing properties. Admissible methods lie on a Pareto frontier (i.e., improving E reduces C, or vice versa). Three methods are admissible: universal style transfer produces very good C but weak E; modifying the optimization used for Gatys' loss produces a method with strong E and strong C; and a modified cross-layer method has slightly better E at strong cost in C. While the histogram loss improves the E statistics of Gatys' method, it does not make the method admissible. Surprisingly, style weights have relatively little effect in improving EC scores, and most variability in transfer is explained by the style itself (meaning experimenters can be misguided by selecting styles). Our GitHub link is available.
Rathi, P., Adarsh, P., Kumar, M.  2020.  Deep Learning Approach for Arbitrary Image Style Fusion and Transformation using SANET model. 2020 4th International Conference on Trends in Electronics and Informatics (ICOEI)(48184). :1049–1057.
For real-time applications of arbitrary style transformation, there is a trade-off between the quality of results and the running time of existing algorithms, so the quality of the generated artwork must be kept in equilibrium with the speed of execution. It is difficult for present arbitrary style-transformation procedures to preserve the structure of the content image while blending in the design and pattern of the style image. This paper presents the implementation of a network using SANET models for generating impressive artworks. It is flexible in fusing new style characteristics while sustaining the semantic structure of the content image. The identity-loss function helps to minimize the overall loss and conserves the spatial arrangement of the content. The results demonstrate that this method is practically efficient and can therefore be employed for real-time fusion and transformation using arbitrary styles.
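One common reading of the identity loss mentioned here, following the original SANet formulation (an assumption about this implementation), is that stylizing an image with itself should reproduce the image; a sketch, where transfer_net(content, style) is an assumed interface:

```python
import torch.nn.functional as F

def identity_loss(transfer_net, content, style):
    """Stylizing an image with itself should return the image unchanged,
    which anchors the network to preserve spatial arrangement."""
    return (F.mse_loss(transfer_net(content, content), content) +
            F.mse_loss(transfer_net(style, style), style))
```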
Jin, N.  2020.  CNN-Based Image Style Transfer and Its Applications. 2020 International Conference on Computing and Data Science (CDS). :387–390.
The convolutional neural network is a variant of the deep neural network. It is widely used to extract image features and is applied in image classification, image generation, and style transfer. In the style transfer task, we can extract the content from one picture through a convolutional neural network, extract the style from another picture, and generate a new picture from the combination. In this paper, we show the general steps of image style transfer based on convolutional neural networks through a specific example, and discuss possible future applications.
Wu, L., Chen, X., Meng, L., Meng, X.  2020.  Multitask Adversarial Learning for Chinese Font Style Transfer. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.
Style transfer between Chinese fonts is challenging due to both the complexity of Chinese characters and the significant differences between fonts. Existing algorithms for this task typically learn a mapping between the reference and target fonts for each character. Subsequently, this mapping is used to generate the characters that do not exist in the target font. However, the characters available for training are unlikely to cover all fine-grained parts of the missing characters, leading to an overfitting problem. As a result, the generated characters of the target font may suffer from incomplete or even missing radicals and dirty dots. To address this problem, this paper presents a multi-task adversarial learning approach, termed MTfontGAN, to generate more vivid Chinese characters. MTfontGAN learns to transfer a reference font to multiple target fonts simultaneously. An alignment is imposed on the encoders of different tasks to make them focus on the important parts of the characters in general style transfer. Such cross-task interactions at the feature level effectively improve the generalization capability of MTfontGAN. The performance of MTfontGAN is evaluated on three Chinese font datasets. Experimental results show that MTfontGAN outperforms state-of-the-art algorithms in a single-task setting. More importantly, increasing the number of tasks leads to better performance in all of them.
Wang, H., Li, Y., Wang, Y., Hu, H., Yang, M.-H.  2020.  Collaborative Distillation for Ultra-Resolution Universal Style Transfer. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :1857–1866.
Universal style transfer methods typically leverage rich representations from deep Convolutional Neural Network (CNN) models (e.g., VGG-19) pre-trained on large collections of images. Despite their effectiveness, the application of such methods is heavily constrained by the large model size when handling ultra-resolution images with limited memory. In this work, we present a new knowledge distillation method (named Collaborative Distillation) for encoder-decoder based neural style transfer that reduces the number of convolutional filters. The main idea is underpinned by the finding that encoder-decoder pairs construct an exclusive collaborative relationship, which can be regarded as a new kind of knowledge for style transfer models. Moreover, to overcome the feature size mismatch when applying collaborative distillation, a linear embedding loss is introduced to drive the student network to learn a linear embedding of the teacher's features. Extensive experiments show the effectiveness of our method when applied to different universal style transfer approaches (WCT and AdaIN), even when the model size is reduced by a factor of 15.5. In particular, on WCT with the compressed models, we achieve ultra-resolution (over 40 megapixels) universal style transfer on a 12GB GPU for the first time. Further experiments on an optimization-based stylization scheme show the generality of our algorithm across stylization paradigms. Our code and trained models are available at https://github.com/mingsun-tse/collaborative-distillation.
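A plausible form for the linear embedding loss described above (a sketch under our assumptions; the paper's exact parameterization may differ) maps the student's thinner features into the teacher's channel space with a learned 1×1 convolution before matching:

```python
import torch
import torch.nn.functional as F

class LinearEmbeddingLoss(torch.nn.Module):
    """Match student features to teacher features through a learned linear map."""
    def __init__(self, student_channels: int, teacher_channels: int):
        super().__init__()
        # A 1x1 convolution is a per-pixel linear embedding across channels.
        self.embed = torch.nn.Conv2d(student_channels, teacher_channels,
                                     kernel_size=1, bias=False)

    def forward(self, f_student: torch.Tensor, f_teacher: torch.Tensor) -> torch.Tensor:
        return F.mse_loss(self.embed(f_student), f_teacher)
```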
Jiang, H., Du, M., Whiteside, D., Moursy, O., Yang, Y.  2020.  An Approach to Embedding a Style Transfer Model into a Mobile APP. 2020 International Conference on Big Data, Artificial Intelligence and Internet of Things Engineering (ICBAIE). :307–316.
The prevalence of photo processing apps reflects the demand for picture editing. As an implementation of the convolutional neural network, style transfer has been deeply investigated, and there are supporting materials for realizing it on the PC platform. However, few approaches have been described for deploying a style transfer model on mobile devices and meeting the requirements of mobile users. The traditional style transfer model takes hours to run; therefore, based on a Perceptual Losses algorithm [1], we created a feed-forward neural network for each style and reduced the processing time to a few seconds. The training data were generated from a pre-trained convolutional neural network model, VGG-19. The algorithm took a thousandth of the time and generated output similar to the original. Furthermore, we optimized the model and deployed it with the TensorFlow Mobile library. We froze the model and used a bitmap to scale inputs to 720×720, then reverted back to the original resolution. The reverting process may create some blur, but it can be regarded as a feature of the art. The generated images have reliable quality, and the waiting time is independent of the content and pattern of the input images. The main factor influencing the processing time is the input resolution. The average waiting time of our model on the mobile phone, a HUAWEI P20 Pro, is less than 2 seconds for 720p images and around 2.8 seconds for 1080p images, roughly ten times slower than on the PC GPU, a Tesla T40. The performance difference depends on the architecture of the model.
Ye, H., Liu, W., Huang, S.  2020.  Method of Image Style Transfer Based on Edge Detection. 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC). 1:1635–1639.
To overcome the loss of edge information during neural network processing, a method of neural style transfer based on edge detection is presented. The edge information of the content image is extracted, and the edge information image is processed in the neural network together with the content image and the style image to constrain the edge information of the content image. Compared with the Gatys algorithm and the Markov random field neural network algorithm, the proposed method successfully retains the edge structure of the content image after style transfer.
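One simple way to realize such an edge constraint (a sketch under our assumptions; the paper does not specify its edge detector or loss form in the abstract) is to penalize differences between Sobel edge maps of the stylized output and the content image:

```python
import torch
import torch.nn.functional as F

def sobel_edges(img: torch.Tensor) -> torch.Tensor:
    """Edge magnitude of a single-channel image batch (B, 1, H, W)."""
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)                 # vertical-gradient kernel
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def edge_loss(stylized: torch.Tensor, content: torch.Tensor) -> torch.Tensor:
    """Constrain the stylized image to keep the content image's edge structure."""
    return F.mse_loss(sobel_edges(stylized), sobel_edges(content))
```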
Mangaokar, N., Pu, J., Bhattacharya, P., Reddy, C. K., Viswanath, B.  2020.  Jekyll: Attacking Medical Image Diagnostics using Deep Generative Models. 2020 IEEE European Symposium on Security and Privacy (EuroS P). :139–157.
Advances in deep neural networks (DNNs) have shown tremendous promise in the medical domain. However, the deep learning tools that are helping the domain can also be used against it. Given the prevalence of fraud in the healthcare domain, it is important to consider the adversarial use of DNNs in manipulating sensitive data that is crucial to patient healthcare. In this work, we present the design and implementation of a DNN-based image translation attack on biomedical imagery. More specifically, we propose Jekyll, a neural style transfer framework that takes as input a biomedical image of a patient and translates it to a new image that indicates an attacker-chosen disease condition. The potential for fraudulent claims based on such generated `fake' medical images is significant, and we demonstrate successful attacks on both X-ray and retinal fundus image modalities. We show that these attacks manage to mislead both medical professionals and algorithmic detection schemes. Lastly, we also investigate defensive measures based on machine learning to detect images generated by Jekyll.
Bai, Y., Guo, Y., Wei, J., Lu, L., Wang, R., Wang, Y.  2020.  Fake Generated Painting Detection Via Frequency Analysis. 2020 IEEE International Conference on Image Processing (ICIP). :1256–1260.
With the development of deep neural networks, digital fake paintings can be generated by various style transfer algorithms. To detect such fake generated paintings, we analyze fake generated and real paintings in the Fourier frequency domain and observe statistical differences and artifacts. Based on these observations, we propose Fake Generated Painting Detection via Frequency Analysis (FGPD-FA), which extracts three types of features in the frequency domain. We also propose a digital fake painting detection database for assessing the proposed method. Experimental results demonstrate the effectiveness of the proposed method under different testing conditions.
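The kind of frequency-domain inspection described here can start from the log-magnitude Fourier spectrum, where upsampling and stylization artifacts often appear as periodic patterns; the sketch below is our illustration, not the paper's actual feature set:

```python
import torch

def log_spectrum(gray_img: torch.Tensor) -> torch.Tensor:
    """Centered log-magnitude Fourier spectrum of a grayscale image (H, W).
    Generated paintings often leave periodic artifacts visible here."""
    spectrum = torch.fft.fftshift(torch.fft.fft2(gray_img))
    return torch.log1p(spectrum.abs())
```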
Jin, H., Wang, T., Zhang, M., Li, M., Wang, Y., Snoussi, H.  2020.  Neural Style Transfer for Picture with Gradient Gram Matrix Description. 2020 39th Chinese Control Conference (CCC). :7026–7030.
Despite the high performance of neural style transfer on stylized pictures, we found that the algorithm of Gatys et al. [1] cannot perfectly reconstruct texture style. Output stylized pictures can exhibit unsatisfying, unexpected textures, such as muddiness in local areas and insufficient grain expression. Our method builds on the original algorithm, adding a Gradient Gram description to the style loss with the aim of strengthening texture expression and eliminating muddiness. Our method lengthens the runtime to some extent; however, its output stylized pictures achieve higher performance on texture details, especially in the elimination of muddiness.
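One plausible reading of the Gradient Gram description (our assumption; the abstract does not give the exact formulation) is a Gram matrix computed over horizontal and vertical feature differences, so the style loss also matches how textures vary spatially:

```python
import torch

def gradient_gram(feat: torch.Tensor) -> torch.Tensor:
    """Gram matrices of finite-difference gradients of a (C, H, W) feature map."""
    dx = feat[:, :, 1:] - feat[:, :, :-1]   # horizontal differences, (C, H, W-1)
    dy = feat[:, 1:, :] - feat[:, :-1, :]   # vertical differences,   (C, H-1, W)

    def gram(t: torch.Tensor) -> torch.Tensor:
        flat = t.reshape(t.shape[0], -1)
        return flat @ flat.T / flat.numel()

    return gram(dx) + gram(dy)
```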
2020-12-11
Zhou, Z., Yang, Y., Cai, Z., Yang, Y., Lin, L.  2019.  Combined Layer GAN for Image Style Transfer. 2019 IEEE International Conference on Computational Electromagnetics (ICCEM). :1–3.

Image style transfer is an increasingly interesting topic in computer vision, where the goal is to map images from one style to another. In this paper, we propose a new framework called Combined Layer GAN as a solution to the image style transfer problem. Specifically, edge and color constraints are proposed and explored in the GAN-based image translation method to improve performance. The motivation for this work is that color and edges are fundamental visual factors of an image, yet the traditional deep-network-based approach lacks fine control of these factors in the translation process, and performance is degraded as a consequence. Our experiments and evaluations show that our method with the edge and color constraints is more stable and significantly improves performance compared with traditional methods.

Peng, M., Wu, Q.  2019.  Enhanced Style Transfer in Real-Time with Histogram-Matched Instance Normalization. 2019 IEEE 21st International Conference on High Performance Computing and Communications; IEEE 17th International Conference on Smart City; IEEE 5th International Conference on Data Science and Systems (HPCC/SmartCity/DSS). :2001–2006.

Since neural networks can be used to extract information from an image, Gatys et al. found that the content and style of images could be separated and recombined into a new image, a process called style transfer. Many feed-forward neural networks have since been suggested to speed up the original method and make style transfer a practical application. However, this comes at a price: these feed-forward networks are fixed once trained, which means they cannot transfer arbitrary styles in real time, only a single one. Some approaches have been offered to relieve this dilemma, such as a style-swap layer and an adaptive instance normalization (AdaIN) layer. It is worth mentioning that the AdaIN layer only aligns the means and variances of the content feature maps with those of the style feature maps. Our method aims to present an operational approach that enables arbitrary style transfer in real time, preserving more statistical information through histogram matching and providing more reliable texture clarity and more humane user control. We achieve better performance than existing approaches without adding computational complexity, at a speed comparable to the fastest style transfer methods. Our method provides more flexible user control and trustworthy quality and stability.
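For reference, the AdaIN operation this paper builds on aligns only the per-channel means and standard deviations, and fits in a few lines; the histogram-matched variant the paper proposes would replace this moment alignment with a fuller channel-wise histogram match (a standard sketch, not the paper's code):

```python
import torch

def adain(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Shift content features (B, C, H, W) to the style's channel statistics."""
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True)
    return s_std * (content - c_mean) / c_std + s_mean
```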

Vasiliu, V., Sörös, G.  2019.  Coherent Rendering of Virtual Smile Previews with Fast Neural Style Transfer. 2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). :66–73.

Coherent rendering in augmented reality deals with synthesizing virtual content that seamlessly blends in with the real content. Unfortunately, capturing or modeling every real aspect in the virtual rendering process is often unfeasible or too expensive. We present a post-processing method that improves the look of rendered overlays in a dental virtual try-on application. We combine the original frame and the default rendered frame in an autoencoder neural network in order to obtain a more natural output, inspired by artistic style transfer research. Specifically, we apply the original frame as style on the rendered frame as content, repeating the process with each new pair of frames. Our method requires only a single forward pass, our shallow architecture ensures fast execution, and our internal feedback loop inherently enforces temporal consistency.

Mikołajczyk, A., Grochowski, M.  2019.  Style transfer-based image synthesis as an efficient regularization technique in deep learning. 2019 24th International Conference on Methods and Models in Automation and Robotics (MMAR). :42–47.

These days, deep learning is the fastest-growing area in the field of machine learning. Convolutional neural networks are currently the main tool used for image analysis and classification. Despite great achievements and perspectives, deep neural networks and their accompanying learning algorithms have some relevant challenges to tackle. In this paper, we have focused on one of the most frequently mentioned problems in the field of machine learning: relatively poor generalization abilities. Partial remedies for this are regularization techniques, e.g., dropout, batch normalization, weight decay, transfer learning, early stopping, and data augmentation. In this paper we have focused on data augmentation. We propose a method based on neural style transfer, which allows us to generate new unlabeled images of high perceptual quality that combine the content of a base image with the appearance of another. In the proposed approach, the newly created images are described with pseudo-labels and then used as a training dataset. Real, labeled images are divided into the validation and test sets. We validated the proposed method on a challenging skin lesion classification case study. Four representative neural architectures are examined. The obtained results show the strong potential of the proposed approach.
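The augmentation scheme can be summarized in a short sketch; stylize below stands in for any neural style transfer routine (a hypothetical name), and the base image's label is reused as the pseudo-label:

```python
def augment_with_style(base_images, base_labels, style_images, stylize):
    """Build a pseudo-labeled training set from stylized copies of base images."""
    augmented = []
    for image, label in zip(base_images, base_labels):
        for style in style_images:
            new_image = stylize(content=image, style=style)  # content kept, appearance swapped
            augmented.append((new_image, label))             # pseudo-label = content label
    return augmented
```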

Palash, M. H., Das, P. P., Haque, S.  2019.  Sentimental Style Transfer in Text with Multigenerative Variational Auto-Encoder. 2019 International Conference on Bangla Speech and Language Processing (ICBSLP). :1–4.

Style transfer is an emerging trend among applications of deep learning, especially with image and audio data, where it has proven very useful and sometimes produces astonishing results. Styles of textual data are gradually being changed in many novel works as well. This paper focuses on transferring the sentimental vibe of a sentence: given a positive clause, the negative version of that clause or sentence is generated while keeping the context the same, and the opposite is done with negative sentences. Previously this was a very tough job, because the go-to techniques for such tasks, Recurrent Neural Networks (RNNs) [1] and Long Short-Term Memories (LSTMs) [2], do not perform well on it. But with newer technologies like the Generative Adversarial Network (GAN) and the Variational Auto-Encoder (VAE) emerging, this task has become more and more feasible and effective. In this paper, a Multi-Generative Variational Auto-Encoder is employed to transfer sentiment values. In spite of working with a small dataset, the model proves to be promising.