An improved style transfer approach for videos

Title: An improved style transfer approach for videos
Publication Type: Conference Paper
Year of Publication: 2018
Authors: Chang, R., Chang, C., Way, D., Shih, Z.
Conference Name: 2018 International Workshop on Advanced Image Technology (IWAIT)
Keywords: background object segmentation, Coherence, discontinuous segmentation, feedforward neural nets, foreground object segmentation, fully convolutional neural network, image segmentation, image sequences, motion estimation, motion estimation methods, Motion segmentation, Neural Network, neural style transfer, occlusion, optical flow, Predictive Metrics, pubcrawl, Resiliency, Scalability, semantic segmentation, Semantics, shape deformation, style transfer, style transfer approach, video signal processing, Videos
Abstract

In this paper, we present an improved approach to style transfer for videos based on semantic segmentation. We segment foreground objects from the background and then apply a different style to each. A fully convolutional neural network performs the semantic segmentation. We increase the reliability of the segmentation, using the segmentation itself together with the relationship between foreground objects and the background to refine it iteratively. We also use the segmentation to improve optical flow, applying different motion estimation methods to foreground objects and the background. This sharpens the motion boundaries of the optical flow and resolves the incorrect and discontinuous segmentation caused by occlusion and shape deformation.
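The core idea of applying different styles to foreground and background can be sketched as a mask-based composite. The snippet below is a minimal illustration, not code from the paper: it assumes the two stylized frames and a binary segmentation mask are already available (e.g. from two style-transfer networks and an FCN segmenter), and `composite_styles` is a hypothetical helper name.

```python
import numpy as np

def composite_styles(styled_fg, styled_bg, mask):
    """Blend two independently stylized frames with a binary mask.

    styled_fg, styled_bg : H x W x 3 float arrays (hypothetical outputs
        of separate foreground/background style-transfer networks).
    mask : H x W array from a semantic-segmentation model
        (1 = foreground, 0 = background).
    """
    # Add a channel axis so the mask broadcasts over RGB.
    m = mask[..., None].astype(styled_fg.dtype)
    return m * styled_fg + (1.0 - m) * styled_bg

# Toy example: 2x2 frame whose left column is foreground.
fg = np.ones((2, 2, 3))           # stand-in "foreground style"
bg = np.zeros((2, 2, 3))          # stand-in "background style"
mask = np.array([[1, 0], [1, 0]])
out = composite_styles(fg, bg, mask)
```

In the toy example, `out` takes the foreground values in the left column and the background values in the right column. A real pipeline would additionally warp the previous stylized frame with optical flow for temporal coherence, which this sketch omits.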

DOI: 10.1109/IWAIT.2018.8369741
Citation Key: chang_improved_2018