Bibliography
In this paper, we present an improved approach to video style transfer based on semantic segmentation. We segment foreground objects and background and apply a different style to each. A fully convolutional neural network performs the semantic segmentation. We increase the reliability of the segmentation and use the segmentation results, together with the relationship between foreground objects and background, to refine the segmentation iteratively. We also use the segmentation to improve optical flow, applying different motion estimation methods to foreground objects and to the background. This sharpens the motion boundaries of the optical flow and resolves the incorrect and discontinuous segmentation caused by occlusion and shape deformation.
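The core compositing step of this approach can be illustrated in a few lines: a minimal sketch in Python/NumPy, assuming a binary foreground mask produced by the segmentation network and two pre-stylized versions of the frame. The function name composite_styles and the mask format are illustrative assumptions, not the paper's actual interface.

import numpy as np

def composite_styles(fg_mask, style_fg, style_bg):
    # fg_mask  : H x W bool array, True where a foreground object was segmented
    # style_fg : H x W x 3 float array, frame rendered with the foreground style
    # style_bg : H x W x 3 float array, frame rendered with the background style
    mask = fg_mask[..., None].astype(style_fg.dtype)   # broadcast mask to 3 channels
    return mask * style_fg + (1.0 - mask) * style_bg   # pick a style per pixel

# Example: a 4x4 frame whose left half is foreground.
fg_mask = np.zeros((4, 4), dtype=bool)
fg_mask[:, :2] = True
stylized = composite_styles(fg_mask,
                            style_fg=np.full((4, 4, 3), 0.8),   # "foreground style"
                            style_bg=np.full((4, 4, 3), 0.2))   # "background style"
print(stylized[..., 0])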
In the field of image processing, detecting human motion in video and recognizing actions from video sequences is a complex and challenging task. This paper presents a novel approach to detecting human motion and recognizing actions. Different human actions are recognized by tracking a selected object over consecutive frames of a video or image sequence. First, the background motion is subtracted from the input video stream and binary images are constructed. Using spatiotemporal interest points, the object to be monitored is selected by enclosing the relevant pixels within a bounding rectangle. The selected foreground pixels within the bounding rectangle are then tracked using an edge tracking algorithm. Features are extracted from the tracked pixels and used to detect human motion. Finally, the different human actions are recognized using a K-Nearest Neighbor classifier. This methodology applies wherever monitoring human actions is required and security is the prime concern, such as shop, city, and airport surveillance. The results are significant and are evaluated on the KTH and Weizmann datasets, which contain actions such as bending, running, walking, skipping, and hand-waving.
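The final classification stage can be sketched as follows, assuming per-clip feature vectors have already been extracted from the tracked foreground pixels (the background subtraction, interest-point selection, and edge tracking steps are not shown). The use of scikit-learn, the random placeholder features, the 16-dimensional feature size, and the five-class label set are illustrative assumptions.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical per-clip feature vectors, e.g. statistics of the tracked
# foreground pixels inside the bounding rectangle. Shapes are illustrative.
rng = np.random.default_rng(0)
train_features = rng.normal(size=(60, 16))     # 60 labelled clips, 16-D features
train_labels = rng.integers(0, 5, size=60)     # 5 action classes (bend/run/walk/skip/wave)
test_features = rng.normal(size=(10, 16))      # unlabelled clips to classify

# K-Nearest Neighbor classification of actions, as in the paper's final stage.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(train_features, train_labels)
predicted_actions = knn.predict(test_features)
print(predicted_actions)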
Vehicle video detection and tracking technology has played an important role in Intelligent Transportation Systems (ITS) in recent years. Occlusion among vehicles is one of the most difficult problems in vehicle tracking. To handle occlusion, this paper proposes an effective solution that applies a Markov Random Field (MRF) to traffic images. The contour of each vehicle is first detected using background subtraction; then a number of blocks carrying the vehicle's texture and motion information are filled inside each vehicle. Several kinds of information are extracted from each block for the subsequent tracking. For each occluded block, two groups of clique functions in the MRF model are defined, representing spatial correlation and motion coherence respectively. By computing each occluded block's total energy function, we resolve the attribution of occluded blocks. Experimental results show that the method handles occlusion effectively and tracks each vehicle continuously.
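The block-attribution step can be sketched as an energy minimization over candidate vehicles: a spatial term compares the block's appearance with a vehicle's existing blocks, and a motion term compares the block's motion vector with the vehicle's average motion. This is a minimal sketch; the Euclidean distances, equal weights, and the data layout are assumptions and do not reproduce the paper's exact clique functions.

import numpy as np

def assign_occluded_block(block_feature, block_motion, vehicles,
                          w_spatial=1.0, w_motion=1.0):
    # vehicles: list of dicts with 'features' (N x D array of block descriptors)
    # and 'motion' (mean 2-D motion vector). Returns the index of the vehicle
    # whose total energy (spatial correlation + motion coherence) is smallest.
    energies = []
    for v in vehicles:
        spatial = np.min(np.linalg.norm(v["features"] - block_feature, axis=1))
        motion = np.linalg.norm(v["motion"] - block_motion)
        energies.append(w_spatial * spatial + w_motion * motion)
    return int(np.argmin(energies))

# Example: one occluded block, two candidate vehicles.
vehicles = [
    {"features": np.array([[0.10, 0.20], [0.15, 0.25]]), "motion": np.array([2.0, 0.0])},
    {"features": np.array([[0.80, 0.90], [0.85, 0.95]]), "motion": np.array([-1.0, 0.5])},
]
print(assign_occluded_block(np.array([0.12, 0.22]), np.array([1.8, 0.1]), vehicles))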