Research on Neural Networks Integration for Object Classification in Video Analysis Systems
Title | Research on Neural Networks Integration for Object Classification in Video Analysis Systems |
Publication Type | Conference Paper |
Year of Publication | 2020 |
Authors | Fomin, I., Burin, V., Bakhshiev, A. |
Conference Name | 2020 International Conference on Industrial Engineering, Applications and Manufacturing (ICIEAM) |
Date Published | May 2020 |
Publisher | IEEE |
ISBN Number | 978-1-7281-4590-7 |
Keywords | convolutional neural networks, Deep Neural Network, deep video, direct Python script execution techniques, false detections, image classification, image motion analysis, image sequence, image sequences, Keras, Keras developer-friendly environment, Keras integration, Metrics, moving objects detection, network architectures, network training, neural nets, neural networks integration, object classification, object detection, Object recognition, outdoor video surveillance cameras, pubcrawl, resilience, Resiliency, Scalability, TensorFlow, video analysis, video analysis system, video cameras, video signal processing, video surveillance, video surveillance system |
Abstract | Object recognition with outdoor video surveillance cameras is an important task for ensuring security at enterprises, in public places, and even on private premises. Systems that detect moving objects in the image sequence from a video surveillance system have long existed, and such a system is partially considered in this research. It detects moving objects using a background model, which has certain shortcomings: some objects are missed, while others are detected falsely. We propose to combine the moving-object detection results with classification by a deep neural network. This makes it possible to determine whether a detected object belongs to a certain class, to filter out false detections, to discard unwanted classes, and, for example, to separate detected people into employees in uniform and everyone else. The authors train the networks in the developer-friendly Keras environment, which allows network architectures to be built, modified, and trained quickly. When Keras is integrated into the video analysis system through direct Python script execution, the processing time is between 6 and 52 ms and the precision is between 59.1% and 97.2%, depending on the architecture. The second integration approach freezes a selected network architecture together with its weights after testing; the frozen architecture is then imported into the video analysis system through the TensorFlow interface for C++. With this type of integration, the processing time is between 3 and 49 ms and the precision is between 63.4% and 97.8%, depending on the architecture. |
URL | https://ieeexplore.ieee.org/document/9112011 |
DOI | 10.1109/ICIEAM48468.2020.9112011 |
Citation Key | fomin_research_2020 |
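The abstract describes two integration routes: calling the Keras model through direct Python script execution, and freezing the trained architecture with its weights so it can be imported into the C++ video analysis system through the TensorFlow interface. As a minimal sketch of the second route (assuming a TensorFlow 2.x toolchain and a hypothetical saved model file `classifier.h5`; the paper's exact procedure is not reproduced here), a Keras classifier can be frozen into a single GraphDef roughly as follows:

```python
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import (
    convert_variables_to_constants_v2,
)

# Hypothetical trained Keras classifier for the detected-object crops.
model = tf.keras.models.load_model("classifier.h5")

# Wrap the model call in a concrete tf.function so its graph can be traced.
concrete_fn = tf.function(lambda x: model(x)).get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype)
)

# "Freeze" the graph: fold all trained variables into constants.
frozen_fn = convert_variables_to_constants_v2(concrete_fn)

# Serialize the frozen GraphDef; a C++ application can later load it
# through TensorFlow's C++ API and run the classifier without Python.
tf.io.write_graph(
    frozen_fn.graph, logdir=".", name="frozen_classifier.pb", as_text=False
)
```

Freezing removes the Python dependency at inference time, which is consistent with the lower latencies reported in the abstract for the C++ integration (3 to 49 ms) compared with direct Python script execution (6 to 52 ms).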