Biblio

Filters: Keyword is Motion segmentation
2020-12-07
Chang, R., Chang, C., Way, D., Shih, Z.  2018.  An improved style transfer approach for videos. 2018 International Workshop on Advanced Image Technology (IWAIT). :1–2.

In this paper, we present an improved approach to style transfer for videos based on semantic segmentation. We segment foreground objects and background, and then apply a different style to each. A fully convolutional neural network performs the semantic segmentation. We increase the reliability of the segmentation by refining it iteratively, using the segmentation itself together with the relationship between foreground objects and background. We also use the segmentation to improve optical flow, applying different motion estimation methods to foreground objects and background. This sharpens the motion boundaries of the optical flow and resolves the incorrect and discontinuous segmentation caused by occlusion and shape deformation.
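The per-region stylization step described above can be sketched as compositing two independently stylized frames with a segmentation mask (a minimal numpy sketch; the stylization networks and the segmentation network themselves are assumed to exist upstream):

```python
import numpy as np

def composite_styles(styled_fg, styled_bg, mask):
    """Combine two stylized versions of the same frame using a binary
    foreground mask (1 = foreground object, 0 = background)."""
    mask = mask[..., None].astype(styled_fg.dtype)  # broadcast over color channels
    return mask * styled_fg + (1.0 - mask) * styled_bg

# Toy 2x2 RGB frame: foreground style is pure red, background style pure blue.
fg = np.tile(np.array([1.0, 0.0, 0.0]), (2, 2, 1))
bg = np.tile(np.array([0.0, 0.0, 1.0]), (2, 2, 1))
mask = np.array([[1, 0], [0, 1]])  # hypothetical segmentation output
out = composite_styles(fg, bg, mask)  # red where mask=1, blue where mask=0
```

In the full pipeline the mask would come from the fully convolutional segmentation network, refined per frame as the abstract describes, rather than being given directly.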

2018-03-19
Hu, Xiaoyan, Xie, Shunbo.  2017.  Efficient and Robust Motion Segmentation via Adaptive Similarity Metric. Proceedings of the Computer Graphics International Conference. :34:1–34:6.

This paper introduces an efficient and robust method that segments long motion capture data into distinct behaviors. The method is unsupervised and fully automatic. We first apply spectral clustering to a motion affinity matrix to obtain a rough segmentation. We combine two statistical filters to remove noise and obtain a good initial guess for the cut points as well as the number of segments. We then analyze joint usage within each rough segment and recompute an adaptive affinity matrix for the motion. Applying spectral clustering again to this adaptive affinity matrix produces a segmentation that is robust and accurate with respect to the ground truth. Experiments show that the proposed approach outperforms existing methods on the CMU Mocap database.
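The core step, spectral clustering on an affinity matrix, can be sketched for the two-segment case using the sign of the Fiedler vector of the graph Laplacian (a minimal numpy sketch on a toy affinity matrix; the paper's statistical filters and adaptive, joint-usage-weighted affinities are not reproduced):

```python
import numpy as np

def spectral_bipartition(affinity):
    """Two-way spectral clustering: split nodes by the sign of the
    Fiedler vector (eigenvector of the second-smallest eigenvalue)
    of the unnormalized graph Laplacian L = D - W."""
    degrees = affinity.sum(axis=1)
    laplacian = np.diag(degrees) - affinity
    _, eigvecs = np.linalg.eigh(laplacian)  # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]
    return (fiedler > 0).astype(int)        # cluster labels 0/1

# Toy affinity: frames 0-2 are mutually similar, frames 3-5 likewise,
# with one weak link bridging the two behaviors.
W = np.array([
    [0.0, 1.0, 1.0, 0.0, 0.0, 0.0],
    [1.0, 0.0, 1.0, 0.0, 0.0, 0.0],
    [1.0, 1.0, 0.0, 0.1, 0.0, 0.0],
    [0.0, 0.0, 0.1, 0.0, 1.0, 1.0],
    [0.0, 0.0, 0.0, 1.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 1.0, 1.0, 0.0],
])
labels = spectral_bipartition(W)  # frames 0-2 get one label, 3-5 the other
```

For more than two behaviors one would instead take the first k eigenvectors and run k-means on the rows, which is the standard multi-way extension.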

2015-05-01
Gorur, P., Amrutur, B.  2014.  Skip Decision and Reference Frame Selection for Low-Complexity H.264/AVC Surveillance Video Coding. IEEE Transactions on Circuits and Systems for Video Technology. 24:1156–1169.

H.264/advanced video coding (AVC) surveillance video encoders use the Skip mode specified by the standard to reduce bandwidth. They also use multiple reference frames for motion-compensated prediction. In this paper, we propose two techniques that reduce the bandwidth and computational cost of static-camera surveillance video encoders without affecting detection and recognition performance. First, a spatial sampler selects pixels that are segmented using a Gaussian mixture model. Modified weight updates are derived for the parameters of the mixture model to reduce floating-point computation, and the storage pattern of the parameters in memory is modified to improve cache performance. Skip selection is then performed using the segmentation results of the sampled pixels. The second contribution is a low-cost algorithm for choosing reference frames, which reduces the cost of coding uncovered background regions. We also study the number of reference frames required for good coding efficiency. Distortion over foreground pixels is measured to quantify the performance of the proposed techniques. Experimental results show bit-rate savings of up to 94.5% over methods proposed in the literature on video surveillance data sets, and up to a 74.5% reduction in compression complexity without increasing distortion over the foreground regions of the video sequence.
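The per-pixel Gaussian mixture segmentation that drives the Skip decision can be sketched in its standard Stauffer-Grimson form (a minimal numpy sketch for one grayscale pixel; the paper's reduced floating-point weight updates, memory layout, and spatial sampler are not reproduced):

```python
import numpy as np

def gmm_update(x, weights, means, variances, alpha=0.05, thresh=2.5):
    """One mixture-model update step for a single pixel. A pixel is
    foreground if it matches no existing component (standard
    Stauffer-Grimson matching rule: within thresh standard deviations)."""
    dist = np.abs(x - means)
    matches = dist < thresh * np.sqrt(variances)
    is_foreground = not matches.any()
    # Weight update: grow matched components, decay the rest, renormalize.
    weights = (1.0 - alpha) * weights + alpha * matches
    weights /= weights.sum()
    if not is_foreground:
        k = int(np.argmax(matches / np.sqrt(variances)))  # best-matching component
        means[k] += alpha * (x - means[k])
        variances[k] += alpha * ((x - means[k]) ** 2 - variances[k])
    return is_foreground, weights, means, variances

# Two-component model: dominant background mode near 100, weaker mode near 200.
w = np.array([0.7, 0.3])
mu = np.array([100.0, 200.0])
var = np.array([20.0, 20.0])
fg1, w, mu, var = gmm_update(101.0, w, mu, var)  # close to mode 0 -> background
fg2, w, mu, var = gmm_update(10.0, w, mu, var)   # far from both modes -> foreground
```

In the encoder, macroblocks whose sampled pixels are all classified as background become Skip candidates, which is what yields the bandwidth and complexity savings reported above.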