Biblio

Filters: Keyword is visual tracking
2021-07-07
Beghdadi, Azeddine, Bezzine, Ismail, Qureshi, Muhammad Ali.  2020.  A Perceptual Quality-driven Video Surveillance System. 2020 IEEE 23rd International Multitopic Conference (INMIC). :1–6.
Video-based surveillance systems often suffer from poor-quality video in uncontrolled environments. This can strongly affect the performance of high-level tasks such as visual tracking, abnormal event detection or, more generally, scene understanding and interpretation. This work aims to demonstrate the impact and importance of video quality in video surveillance systems. Here, we focus on the most important challenges and difficulties related to the perceptual quality of acquired or transmitted images/videos in uncontrolled environments. In this paper, we propose an architecture for a smart surveillance system that incorporates the perceptual quality of acquired scenes. We study the behaviour of several state-of-the-art video quality metrics on original and distorted sequences from a dedicated surveillance dataset. This study shows that some state-of-the-art image/video quality metrics do not work in the context of video surveillance. It opens a new research direction: developing video quality metrics suited to video surveillance and proposing a new quality-driven framework for video surveillance systems.
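
As a rough illustration of the kind of full-reference evaluation described in this abstract, the sketch below scores a distorted sequence against its original frame by frame using PSNR and SSIM from scikit-image. The synthetic frames, the noise model and the choice of these two metrics are assumptions made for the example; the paper's own surveillance dataset and metric suite are not reproduced here.

# Hedged sketch: frame-wise full-reference quality scoring of a distorted
# clip against its original, using PSNR and SSIM as stand-ins for the
# metrics studied in the paper.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def sequence_quality(original_frames, distorted_frames):
    """Average PSNR/SSIM over paired grayscale uint8 frames."""
    psnr_scores, ssim_scores = [], []
    for ref, dist in zip(original_frames, distorted_frames):
        psnr_scores.append(peak_signal_noise_ratio(ref, dist, data_range=255))
        ssim_scores.append(structural_similarity(ref, dist, data_range=255))
    return float(np.mean(psnr_scores)), float(np.mean(ssim_scores))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in for an acquired sequence and a noisy transmitted copy.
    original = [rng.integers(0, 256, (120, 160), dtype=np.uint8) for _ in range(10)]
    distorted = [np.clip(f + rng.normal(0, 15, f.shape), 0, 255).astype(np.uint8)
                 for f in original]
    psnr, ssim = sequence_quality(original, distorted)
    print(f"mean PSNR: {psnr:.2f} dB, mean SSIM: {ssim:.3f}")

In a quality-driven pipeline such as the one the paper proposes, scores of this kind could be used to decide whether a frame is reliable enough to pass on to trackers or event detectors.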
2018-11-19
Yang, Lingxiao, Liu, Risheng, Zhang, David, Zhang, Lei.  2017.  Deep Location-Specific Tracking. Proceedings of the 25th ACM International Conference on Multimedia. :1309–1317.

Convolutional Neural Network (CNN) based methods have shown significant performance gains on the visual tracking problem in recent years. Due to the many unpredictable changes an object can undergo online, such as abrupt motion, background clutter and large deformation, visual tracking remains a challenging task. We propose a novel algorithm, Deep Location-Specific Tracking, which decomposes the tracking problem into a localization task and a classification task and trains an individual network for each. The localization network exploits the information in the current frame and provides a specific location to improve the probability of successful tracking, while the classification network finds the target among many examples generated around the target location in the previous frame, as well as the one estimated by the localization network in the current frame. CNN-based trackers often have a massive number of trainable parameters and are prone to over-fitting to particular object states, leading to reduced precision or tracking drift. We address this problem by learning a classification network based on 1 × 1 convolution and global average pooling. Extensive experimental results on popular benchmark datasets show that the proposed tracker achieves competitive results without using additional tracking videos for fine-tuning. The code is available at https://github.com/ZjjConan/DLST.
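
The abstract attributes the reduced over-fitting to a classification network built from 1 × 1 convolutions and global average pooling. The sketch below is a minimal, hypothetical PyTorch head in that style, not the authors' released implementation (see the linked repository for that); the channel counts and feature-map sizes are assumed for illustration.

# Hedged sketch of a lightweight target/background classification head using
# only 1x1 convolutions and global average pooling; shapes are illustrative.
import torch
import torch.nn as nn

class TinyClassificationHead(nn.Module):
    """Scores candidate patches as target vs. background."""
    def __init__(self, in_channels=256, hidden=64):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=1),  # 1x1 conv: per-location channel mixing, few parameters
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 2, kernel_size=1),            # two classes: target / background
            nn.AdaptiveAvgPool2d(1),                        # global average pooling over the spatial grid
        )

    def forward(self, feature_maps):
        # feature_maps: (N, C, H, W) CNN features of N candidate patches
        return self.head(feature_maps).flatten(1)           # (N, 2) class scores

if __name__ == "__main__":
    candidates = torch.randn(8, 256, 7, 7)                  # features of 8 candidate patches
    print(TinyClassificationHead()(candidates).shape)       # torch.Size([8, 2])

Keeping the head this small is what limits the trainable parameter count, which is the property the abstract credits with reducing over-fitting and drift.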

2017-02-21
Huang, W., Gu, J., Ma, X.  2015.  Visual Tracking Based on Compressive Sensing and Particle Filter. 2015 IEEE 28th Canadian Conference on Electrical and Computer Engineering (CCECE). :1435–1440.

Visual tracking usually requires a robust appearance model that can handle pose variation, illumination variation, occlusion and the many other interferences that occur in video. So far, a number of tracking algorithms use image samples from previous frames to update their appearance models. That approach has several limitations: 1) at the beginning of tracking there is not enough data for online updating, because these adaptive models are data-dependent, and 2) in many challenging situations it is difficult to update the appearance models robustly, which often results in drift. In this paper, we propose a tracking algorithm based on compressive sensing theory and the particle filter framework. Features are extracted by random projection with a data-independent basis. A particle filter is employed to estimate the target location more accurately and to make full use of the updated classifier. The robustness and effectiveness of our tracker are demonstrated in several experiments.
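
To make the two ingredients named in the abstract concrete, the sketch below combines data-independent random-projection features (a very sparse measurement matrix) with a single predict/update/resample step of a particle filter. The window size, particle count, motion noise and the plain feature-distance likelihood are illustrative assumptions; the paper's actual classifier and update scheme are not reproduced.

# Hedged sketch: compressive (random-projection) features plus one particle
# filter step; all constants below are assumptions made for the example.
import numpy as np

rng = np.random.default_rng(0)
PATCH = 16          # assumed square tracking window, in pixels
N_FEATURES = 50     # compressed feature dimension

def sparse_random_matrix(n_features, patch_dim, s=3):
    """Data-independent, very sparse random projection matrix."""
    return rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)],
                      size=(n_features, patch_dim),
                      p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])

PROJ = sparse_random_matrix(N_FEATURES, PATCH * PATCH)

def crop(frame, cx, cy):
    """Clamp-and-crop a PATCH x PATCH window centred at (cx, cy)."""
    x = int(np.clip(cx - PATCH // 2, 0, frame.shape[1] - PATCH))
    y = int(np.clip(cy - PATCH // 2, 0, frame.shape[0] - PATCH))
    return frame[y:y + PATCH, x:x + PATCH]

def features(frame, cx, cy):
    """Compressive features: random projection of the raw window."""
    return PROJ @ crop(frame, cx, cy).ravel().astype(np.float64)

def particle_filter_step(particles, frame, target_feat, motion_std=4.0):
    """One predict/update/resample cycle over (N, 2) particle centres."""
    particles = particles + rng.normal(0.0, motion_std, particles.shape)  # predict
    dists = np.array([np.linalg.norm(features(frame, x, y) - target_feat)
                      for x, y in particles])
    weights = np.exp(-dists / (dists.mean() + 1e-9))                      # update
    weights /= weights.sum()
    estimate = weights @ particles                                        # weighted-mean location
    resampled = particles[rng.choice(len(particles), size=len(particles), p=weights)]
    return resampled, estimate

if __name__ == "__main__":
    frame = rng.integers(0, 256, (120, 160)).astype(np.float64)
    target_feat = features(frame, 80, 60)
    particles = np.tile([80.0, 60.0], (100, 1))
    particles, est = particle_filter_step(particles, frame, target_feat)
    print("estimated centre:", est)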