Biblio
In this paper, we present an improved approach to video style transfer based on semantic segmentation. We segment foreground objects from the background and apply a different style to each. A fully convolutional network performs the semantic segmentation. We increase the reliability of the segmentation and refine it iteratively, using the segmentation itself together with the relationship between foreground objects and the background. We also use the segmentation to improve optical flow, applying different motion estimation methods to foreground objects and to the background. This sharpens the motion boundaries of the optical flow and resolves the incorrect or discontinuous segmentation caused by occlusion and shape deformation.
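As a minimal sketch of the segmentation-guided stylization step described above: once a foreground mask is available, two separately stylized renderings can be composited per pixel. The function name and the toy single-channel "images" below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical compositing step: take the foreground-styled pixel where
# the segmentation mask is 1, and the background-styled pixel elsewhere.

def composite(styled_fg, styled_bg, mask):
    """Per-pixel blend of two stylized frames under a binary mask."""
    return [
        [fg if m else bg for fg, bg, m in zip(fr, br, mr)]
        for fr, br, mr in zip(styled_fg, styled_bg, mask)
    ]

# Toy 2x3 single-channel frames and mask.
fg = [[10, 10, 10], [10, 10, 10]]
bg = [[99, 99, 99], [99, 99, 99]]
mask = [[1, 1, 0], [0, 1, 0]]
print(composite(fg, bg, mask))  # [[10, 10, 99], [99, 10, 99]]
```

In practice the mask would come from the fully convolutional network and could be soft (alpha blending) rather than binary.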
This paper investigates several techniques that increase the accuracy of motion boundaries in the motion fields estimated by a local dense estimation scheme. In particular, we examine two matching metrics: MSE in the image domain, and a recently proposed multiresolution metric that has been shown to produce more accurate motion boundaries. We also examine several edge-preserving filters. The edge-aware moving average filter proposed in this paper takes an input image and the result of an edge detection algorithm, and outputs an image that is smooth everywhere except at the detected edges. We find that the matching metric plays a larger role than the choice of edge-preserving filter in estimating accurate and compressible motion fields; nevertheless, the proposed filter can further improve the accuracy of the motion boundaries. These findings are useful for a number of recently proposed scalable interactive video coding schemes.
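The edge-aware moving average idea can be sketched in one dimension: average within a local window, but do not let the window cross a detected edge, so the signal is smoothed everywhere except at edges. The function name, the window parameter, and the edge encoding below are assumptions for illustration, not the paper's exact definition.

```python
# 1-D sketch (assumed form) of an edge-aware moving average:
# edges[i] == 1 marks a detected edge between sample i-1 and sample i,
# and the averaging window is truncated at any such edge.

def edge_aware_moving_average(signal, edges, radius=1):
    out = []
    n = len(signal)
    for i in range(n):
        acc, cnt = signal[i], 1
        # extend the window to the left until an edge or the radius limit
        j = i
        while j > 0 and j > i - radius and not edges[j]:
            j -= 1
            acc += signal[j]
            cnt += 1
        # extend the window to the right likewise
        j = i
        while j + 1 < n and j < i + radius and not edges[j + 1]:
            j += 1
            acc += signal[j]
            cnt += 1
        out.append(acc / cnt)
    return out

sig = [0, 2, 0, 10, 12, 10]       # noisy step signal
edges = [0, 0, 0, 1, 0, 0]        # edge between index 2 and 3
print(edge_aware_moving_average(sig, edges))
```

Each side of the step is smoothed, but no sample from the other side leaks across the edge, which is exactly the property needed to keep motion boundaries sharp.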