Biblio
Real-time Face Tracking in Surveillance Videos on Chips for Valuable Face Capturing. 2020 International Conference on Artificial Intelligence and Computer Engineering (ICAICE). :281–284.
2020. Face capturing is the task of capturing and storing the "best" face of each person passing by the camera. To some extent it is similar to face tracking, but it uses a different criterion and requires a valuable (i.e., high-quality and recognizable) face selection procedure. Face capturing systems play a critical role in public security. When deployed on edge devices, they can reduce redundant storage in the data center and speed up retrieval of a particular person. However, high computational complexity and a high repetition rate caused by ID-switch errors are major challenges. In this paper, we propose a novel solution for constructing a real-time, low-repetition face capturing system on chips. First, we propose a two-stage association algorithm for memory-efficient and accurate face tracking. Second, we propose a fast and reliable face quality estimation algorithm for valuable face selection. Our pipeline runs at over 20 fps on a HiSilicon Hi3559A SoC with a single NNIE device for neural network inference, while achieving over 95% recall and a repetition rate below 0.4 on real-world surveillance videos.
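The abstract describes the pipeline only at a high level. As a rough, hypothetical sketch of the "valuable face selection" step, the snippet below keeps one best-scoring crop per track ID supplied by an upstream tracker, scoring crops by size plus Laplacian sharpness; the class name FaceCapturer, the face_quality heuristic, and the interface are assumptions for illustration, not the authors' algorithm.

    # Hypothetical sketch: store only the highest-quality face crop per tracked
    # identity, so an edge device keeps one face per person instead of every frame.
    import cv2
    import numpy as np

    def face_quality(crop: np.ndarray) -> float:
        """Cheap quality proxy: prefer large, sharp crops (assumption, not the paper's model)."""
        gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # blurry crops -> low variance
        size = crop.shape[0] * crop.shape[1]
        return 0.5 * np.log1p(size) + 0.5 * np.log1p(sharpness)

    class FaceCapturer:
        """Keeps the best-scoring crop for each track ID produced by an upstream tracker."""
        def __init__(self):
            self.best = {}  # track_id -> (score, crop)

        def update(self, track_id: int, crop: np.ndarray) -> None:
            score = face_quality(crop)
            if track_id not in self.best or score > self.best[track_id][0]:
                self.best[track_id] = (score, crop.copy())

        def flush(self, track_id: int) -> np.ndarray:
            """Called when a track ends; returns the single face to store."""
            return self.best.pop(track_id)[1]

In a full system, an ID-switch error would start a new track and therefore store a duplicate face, which is why the paper's repetition-rate metric depends so heavily on the tracking association stage.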
Exploiting Visual Artifacts to Expose Deepfakes and Face Manipulations. 2019 IEEE Winter Applications of Computer Vision Workshops (WACVW). :83–92.
2019. High-quality face editing in videos is a growing concern and spreads distrust in video content. Upon closer examination, however, many face editing algorithms exhibit artifacts that resemble classical computer vision issues stemming from face tracking and editing. This raises the question of how difficult it is to expose artificial faces produced by current generators. To this end, we review current facial editing methods and several characteristic artifacts from their processing pipelines. We also show that relatively simple visual artifacts can already be quite effective in exposing such manipulations, including Deepfakes and Face2Face. Since the methods are based on visual features, they are easy to explain, even to non-technical experts. The methods are easy to implement and can be rapidly adapted to new manipulation types with little data available. Despite their simplicity, the methods achieve AUC values of up to 0.866.
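As a hedged illustration of the general approach described in this abstract (simple, explainable visual-artifact features fed to a lightweight classifier and evaluated by AUC), the sketch below trains a logistic regression over hand-crafted per-face feature vectors; the artifact_features placeholder and the feature names in its docstring are assumptions, not the features actually used in the paper.

    # Illustrative sketch (not the authors' pipeline): classify faces as real vs.
    # manipulated from a few hand-crafted artifact scores and report AUC.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    def artifact_features(face_crop) -> np.ndarray:
        """Placeholder: return per-face artifact scores, e.g.
        [eye_color_mismatch, missing_specular_highlight, blurred_face_border]."""
        raise NotImplementedError

    def evaluate(X_train, y_train, X_test, y_test) -> float:
        clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        scores = clf.predict_proba(X_test)[:, 1]  # probability of "manipulated"
        return roc_auc_score(y_test, scores)      # the paper reports AUC up to 0.866

    if __name__ == "__main__":
        # Toy demo on synthetic feature vectors, just to show the evaluation flow.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 3))
        y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)
        print("toy AUC:", evaluate(X[:150], y[:150], X[150:], y[150:]))

A shallow model over a handful of interpretable features is what makes this style of detector easy to explain and quick to retrain when a new manipulation type appears with little labeled data.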