Moving Objects Segmentation in Infrared Scene Videos

Title: Moving Objects Segmentation in Infrared Scene Videos
Publication Type: Conference Paper
Year of Publication: 2021
Authors: El Rai, Marwa, Al-Saad, Mina, Darweesh, Muna, Al Mansoori, Saeed, Al Ahmad, Hussain, Mansoor, Wathiq
Conference Name: 2021 4th International Conference on Signal Processing and Information Security (ICSPIS)
Date Published: November 2021
Keywords: background subtraction, Deep Learning, deep video, Infrared Video Sequences, Metrics, Moving Object Segmentation, Object segmentation, pubcrawl, resilience, Resiliency, Scalability, Signal processing algorithms, Streaming media, Thermal sensors, Training, visualization
Abstract: Nowadays, developing an intelligent system for segmenting moving objects from the background is an essential task for video surveillance applications. Recently, a deep learning segmentation algorithm called FgSegNet_S, composed of an encoder CNN, a Feature Pooling Module, and a decoder CNN, has been proposed. It can be trained with only a few training examples. However, FgSegNet_S relies solely on spatial information, whereas temporal information is fundamental to distinguish whether an object is moving or not. In this paper, an improved version, T_FgSegNet_S, is proposed that uses the images subtracted from the initial background as input. The proposed approach is trained and evaluated on two publicly available infrared datasets: remote scene infrared videos captured by medium-wave infrared (MWIR) sensors and the Grayscale Thermal Foreground Detection (GTFD) dataset. The performance of the network is evaluated using precision, recall, and F-measure metrics. The experiments show improved results, especially when compared to other state-of-the-art methods.
DOI: 10.1109/ICSPIS53734.2021.9652436
Citation Key: el_rai_moving_2021
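
The abstract describes feeding background-subtracted frames to the segmentation network and scoring results with precision, recall, and F-measure. The sketch below illustrates those two ideas only in outline; the function names, array shapes, and thresholds are illustrative assumptions and are not taken from the paper or its code.

```python
import numpy as np


def background_subtracted_input(frames, background):
    """Illustrative preprocessing: absolute difference between each frame
    and an initial background frame (shapes assumed: frames (T, H, W),
    background (H, W), values in [0, 1]). The differenced frames, which
    emphasize moving regions, would then be passed to the segmentation
    network in place of the raw frames."""
    return np.abs(frames - background[None, :, :])


def precision_recall_f_measure(pred_mask, gt_mask):
    """Pixel-level precision, recall, and F-measure for boolean masks."""
    tp = np.logical_and(pred_mask, gt_mask).sum()
    fp = np.logical_and(pred_mask, ~gt_mask).sum()
    fn = np.logical_and(~pred_mask, gt_mask).sum()
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f = (2 * precision * recall / (precision + recall)
         if (precision + recall) else 0.0)
    return precision, recall, f
```

This is only a sketch of the general background-subtraction-as-input idea and the standard metric definitions, not a reimplementation of T_FgSegNet_S.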