Nguyen, Huy Hoang, Ta, Thi Nhung, Nguyen, Ngoc Cuong, Bui, Van Truong, Pham, Hung Manh, Nguyen, Duc Minh.  2021.  YOLO Based Real-Time Human Detection for Smart Video Surveillance at the Edge. 2020 IEEE Eighth International Conference on Communications and Electronics (ICCE). :439–444.
Recently, smart video surveillance at the edge has become a trend in developing security applications, since edge computing enables more image-processing tasks to be implemented on the decentralised network nodes of the surveillance system. As a result, many security applications, such as behaviour recognition and prediction, employee safety, perimeter intrusion detection and vandalism deterrence, can minimise their latency or even run in real time as the camera network is scaled up. Technically, human detection is a key step in the implementation of these applications. Owing to their high detection rates, deep learning methods have been widely employed on edge devices to detect human objects. However, their high computational cost makes it challenging to run these methods in real time on resource-limited edge devices. Inspired by You Only Look Once (YOLO), residual learning and Spatial Pyramid Pooling (SPP), a novel form of real-time human detection is presented in this paper. Our approach focuses on designing a network structure so that the resulting model achieves a good trade-off between accuracy and processing time. Experimental results show that our trained model processes 2 FPS on a Raspberry Pi 3B and detects humans with accuracies of 95.05% and 96.81% on the INRIA and Penn-Fudan datasets, respectively. On the human subset of the COCO test dataset, our trained model outperforms the Tiny-YOLO versions, and compared with the SSD-based L-CNN method it achieves better accuracy.
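The abstract names Spatial Pyramid Pooling (SPP) as one of the building blocks behind the detector's accuracy/speed trade-off. As a minimal illustration of the SPP idea only (not the paper's actual network, whose architecture is not given here), the sketch below max-pools a 2D feature map into fixed grids at several scales and concatenates the results, producing a fixed-length vector regardless of the input's spatial size; the function name `spp_features` and the pyramid levels `(1, 2, 4)` are illustrative assumptions.

```python
def spp_features(feature_map, levels=(1, 2, 4)):
    """Sketch of Spatial Pyramid Pooling: for each pyramid level n,
    split the feature map into an n x n grid, take the max of each
    cell, and concatenate everything into one fixed-length vector."""
    h = len(feature_map)
    w = len(feature_map[0])
    out = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                # Cell bounds: floor for the start, ceiling for the end,
                # so the n x n cells jointly cover the whole map.
                r0, r1 = (i * h) // n, ((i + 1) * h + n - 1) // n
                c0, c1 = (j * w) // n, ((j + 1) * w + n - 1) // n
                out.append(max(feature_map[r][c]
                               for r in range(r0, r1)
                               for c in range(c0, c1)))
    return out
```

With levels (1, 2, 4) the output always has 1 + 4 + 16 = 21 values, which is what lets an SPP layer feed variably sized feature maps into fixed-size downstream layers.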