Biblio

Filters: Keyword is video signal processing
2020-07-03
Feng, Ri-Chen, Lin, Daw-Tung, Chen, Ken-Min, Lin, Yi-Yao, Liu, Chin-De.  2019.  Improving Deep Learning by Incorporating Semi-automatic Moving Object Annotation and Filtering for Vision-based Vehicle Detection. 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC). :2484–2489.

Deep learning has undergone tremendous advancements in computer vision studies. The training of deep learning neural networks depends on a considerable amount of ground truth data. However, labeling ground truth data is a labor-intensive task, particularly for large-volume video analytics applications such as video surveillance and vehicle detection for autonomous driving. This paper presents a rapid and accurate method for associative searching in big image data obtained from security monitoring systems. We developed a semi-automatic moving object annotation method for improving deep learning models. The proposed method comprises three stages, namely automatic foreground object extraction, object annotation in subsequent video frames, and dataset construction using human-in-the-loop quick selection. The method thus expedites both dataset collection and ground truth annotation. In contrast to data augmentation and data generative models, the proposed method produces a large amount of real data, which can improve training results and avoid the adverse effects engendered by artifactual data. We applied the constructed annotation dataset to train a deep learning you-only-look-once (YOLO) model to perform vehicle detection on street intersection surveillance videos. Experimental results demonstrate that detection performance improved from a mean average precision (mAP) of 83.99 to 88.03.
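
As a rough illustration of the first stage the abstract describes (automatic foreground object extraction feeding a human-in-the-loop review), here is a minimal Python sketch using OpenCV's MOG2 background subtractor as a stand-in for the authors' method; the file name and the blob-size filter are assumptions, not details from the paper.

    # Foreground proposals for semi-automatic annotation (sketch).
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("intersection.mp4")   # hypothetical input video
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
    kernel = np.ones((3, 3), np.uint8)

    proposals = []   # (frame_index, x, y, w, h) candidate boxes for human review
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # suppress noise
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            x, y, w, h = cv2.boundingRect(c)
            if w * h > 400:   # drop tiny blobs before human review
                proposals.append((frame_idx, x, y, w, h))
        frame_idx += 1
    cap.release()

In a full pipeline the resulting proposals would be propagated through subsequent frames and then confirmed or rejected by a human reviewer, as the abstract's third stage suggests.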

Dinama, Dima Maharika, A’yun, Qurrota, Syahroni, Achmad Dahlan, Adji Sulistijono, Indra, Risnumawan, Anhar.  2019.  Human Detection and Tracking on Surveillance Video Footage Using Convolutional Neural Networks. 2019 International Electronics Symposium (IES). :534–538.

Safety is one of the basic human needs, so we need security systems that are able to prevent crimes from happening. Commonly, surveillance video is used to watch the environment and human behaviour in a location. However, surveillance video alone can only record images or videos, with no additional information. We therefore need a more advanced camera system to obtain additional information such as human position and movement. This research extracts that information from surveillance video footage using human detection and tracking algorithms. The human detection framework is based on deep learning convolutional neural networks, a very popular branch of artificial intelligence. For tracking, a channel and spatial correlation filter is used to track each detected human. The system generates and exports the tracked movement in the footage as additional information, which can be analysed further in future research on surveillance video problems.
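
The tracking stage the abstract mentions, a channel and spatial correlation filter, is available in OpenCV (opencv-contrib-python) as the CSRT tracker. The sketch below is a minimal detect-then-track loop under that assumption; the initial bounding box stands in for a CNN detection and is purely illustrative.

    # Track one detected person with OpenCV's CSRT tracker (sketch).
    import cv2

    cap = cv2.VideoCapture("footage.mp4")   # hypothetical surveillance footage
    ok, frame = cap.read()
    bbox = (300, 150, 60, 120)              # assumed first CNN detection (x, y, w, h)

    tracker = cv2.TrackerCSRT_create()      # channel and spatial reliability filter
    tracker.init(frame, bbox)

    trajectory = []                         # exported movement information
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        found, bbox = tracker.update(frame)
        if found:
            x, y, w, h = map(int, bbox)
            trajectory.append((x + w // 2, y + h // 2))   # center point per frame
    cap.release()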

Suo, Yucong, Zhang, Chen, Xi, Xiaoyun, Wang, Xinyi, Zou, Zhiqiang.  2019.  Video Data Hierarchical Retrieval via Deep Hash Method. 2019 IEEE 11th International Conference on Communication Software and Networks (ICCSN). :709–714.

Video retrieval technology faces a series of challenges with the tremendous growth in the number of videos. In order to improve retrieval efficiency and accuracy, a novel deep hash method for hierarchical video retrieval is proposed in this paper. The approach first uses a cluster-based method to extract key frames, which reduces the workload of the subsequent steps. On this basis, high-level semantic features are extracted with VGG16, a widely used deep convolutional neural network (deep CNN) model. A hierarchical retrieval strategy, roughly divided into coarse search and fine search, is then used to improve retrieval performance. In the coarse search, we modify simHash to learn hash codes for faster speed, and in the fine search, we use the Euclidean distance to achieve higher accuracy. Finally, we compare our approach with two other methods through practical experiments on two videos, and the results demonstrate that our approach achieves better retrieval performance.
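
A minimal sketch of the two-stage search described above, with random hyperplane projections standing in for the paper's modified simHash and random vectors standing in for VGG16 key-frame features; the code length, database size, and candidate count are assumptions.

    # Coarse Hamming search over binary codes, then fine Euclidean re-ranking.
    import numpy as np

    def to_hash(features, planes):
        """Project features onto random hyperplanes and take signs (simHash-style)."""
        return (features @ planes > 0).astype(np.uint8)

    rng = np.random.default_rng(0)
    db_feats = rng.normal(size=(10000, 4096))   # assumed VGG16 key-frame features
    planes = rng.normal(size=(4096, 64))        # 64-bit codes
    db_codes = to_hash(db_feats, planes)

    query = rng.normal(size=4096)
    q_code = to_hash(query[None, :], planes)[0]

    # Coarse search: keep the 100 nearest codes in Hamming distance.
    hamming = (db_codes != q_code).sum(axis=1)
    candidates = np.argsort(hamming)[:100]

    # Fine search: re-rank candidates by Euclidean distance on raw features.
    dists = np.linalg.norm(db_feats[candidates] - query, axis=1)
    ranking = candidates[np.argsort(dists)]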

Adari, Suman Kalyan, Garcia, Washington, Butler, Kevin.  2019.  Adversarial Video Captioning. 2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). :24–27.
In recent years, developments in the field of computer vision have allowed deep learning-based techniques to surpass human-level performance. However, these advances have also culminated in the advent of adversarial machine learning techniques, capable of launching targeted image captioning attacks that easily fool deep learning models. Although attacks in the image domain are well studied, little work has been done in the video domain. In this paper, we show it is possible to extend prior attacks in the image domain to the video captioning task, without heavily affecting the video's playback quality. We demonstrate our attack against a state-of-the-art video captioning model, by extending a prior image captioning attack known as Show and Fool. To the best of our knowledge, this is the first successful method for targeted attacks against a video captioning model, which is able to inject 'subliminal' perturbations into the video stream, and force the model to output a chosen caption with up to 0.981 cosine similarity, achieving near-perfect similarity to chosen target captions.
2020-04-13
Wu, Qiong, Zhang, Haitao, Du, Peilun, Li, Ye, Guo, Jianli, He, Chenze.  2019.  Enabling Adaptive Deep Neural Networks for Video Surveillance in Distributed Edge Clouds. 2019 IEEE 25th International Conference on Parallel and Distributed Systems (ICPADS). :525–528.
In the field of video surveillance, the demands of intelligent video analysis services based on Deep Neural Networks (DNNs) have grown rapidly. Although most existing studies focus on the performance of DNNs pre-deployed at remote clouds, the network delay caused by computation offloading from network cameras to remote clouds is usually long and sometimes unbearable. Edge computing can enable rich services and applications in close proximity to the network cameras. However, owing to the limited computing resources of distributed edge clouds, it is challenging to satisfy low latency and high accuracy requirements for all users, especially when the number of users surges. To address this challenge, we first formulate the intelligent video surveillance task scheduling problem that minimizes the average response time while meeting the performance requirements of tasks and prove that it is NP-hard. Second, we present an adaptive DNN model selection method to identify the most effective DNN model for each task by comparing the feature similarity between the input video segment and pre-stored training videos. Third, we propose a two-stage delay-aware graph searching approach that presents a beneficial trade-off between network delay and computing delay. Experimental results demonstrate the efficiency of our approach.
Sanchez, Cristian, Martinez-Mosquera, Diana, Navarrete, Rosa.  2019.  Matlab Simulation of Algorithms for Face Detection in Video Surveillance. 2019 International Conference on Information Systems and Software Technologies (ICI2ST). :40–47.
Face detection is widely used in video surveillance systems and is the first step for subsequent applications such as monitoring and recognition. For face detection there are a number of algorithms that can extract faces from video images, among them the Viola & Jones cascade method and the geometric-model method based on the Hausdorff distance. In this article, both algorithms are analyzed theoretically and the better one is determined in terms of efficiency and resource usage. Considering the most common problems in face detection for video surveillance, such as brightness conditions and the rotation angle of the face, tests were carried out in 13 different scenarios with the theoretically better algorithm and with its combination with another algorithm. The images obtained with a digital camera in the 13 scenarios were analyzed using Matlab implementations of the Viola & Jones algorithm and of the Viola & Jones algorithm combined with the Kanade-Lucas-Tomasi algorithm, the latter adding the ability to track a single object. This paper presents the detection percentages, false positives, and false negatives for each image and for each simulation, identifying the scenarios with the most detection problems and the most accurate algorithm for face detection.
M.R., Anala, Makker, Malika, Ashok, Aakanksha.  2019.  Anomaly Detection in Surveillance Videos. 2019 26th International Conference on High Performance Computing, Data and Analytics Workshop (HiPCW). :93–98.
Every public or private area today is preferably kept under surveillance to ensure a high level of security. Since surveillance happens round the clock, the data gathered is huge and requires a lot of manual work to go through every second of the recorded videos. This paper presents a system which can detect anomalous behaviors and alert the user to the type of anomalous behavior. Since there is a myriad of possible anomalies, the classification had to be narrowed down to anomalies that are commonly seen and have a large impact on public safety, such as explosions, road accidents, assault, and shooting. The system can detect explosions, road accidents, shootings, and fighting, and even output the frame of their occurrence. The model has been trained with videos belonging to these classes, using the UCF-Crime dataset. Learning patterns from videos requires learning both spatial and temporal features: convolutional neural networks (CNNs) extract spatial features, and Long Short-Term Memory (LSTM) networks learn the sequences. The classification, using a CNN-LSTM model, achieves an accuracy of 85%.
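
The abstract does not give the exact architecture, but a common way to realize a CNN-LSTM video classifier is a per-frame CNN wrapped in TimeDistributed followed by an LSTM, as in this hedged Keras sketch; clip length, image size, and layer sizes are assumptions, and the five classes follow the abstract's anomaly types plus a normal class.

    # CNN-LSTM clip classifier sketch (not the paper's exact architecture).
    import tensorflow as tf
    from tensorflow.keras import layers, models

    model = models.Sequential([
        # Apply the same small CNN to each of 16 frames (64x64 RGB).
        layers.TimeDistributed(layers.Conv2D(32, 3, activation="relu"),
                               input_shape=(16, 64, 64, 3)),
        layers.TimeDistributed(layers.MaxPooling2D()),
        layers.TimeDistributed(layers.Conv2D(64, 3, activation="relu")),
        layers.TimeDistributed(layers.MaxPooling2D()),
        layers.TimeDistributed(layers.Flatten()),
        layers.LSTM(128),                 # learn temporal order of frame features
        layers.Dense(5, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
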
Liu, Shidong, Bu, Xiande.  2019.  Performance Modeling and Assessment of Unified Video Surveillance System Based on Ubiquitous SG-eIoT. 2019 IEEE International Conference on Energy Internet (ICEI). :238–243.
The video surveillance system is an important application on the ubiquitous SG-eIoT. A comparative analysis of the traditional video surveillance scheme and the unified video surveillance solution in the eIoT environment is made. Network load and service latency under the two schemes are theoretically modeled and simulated. Based on the simulation results, suggestions are given for connecting video terminals to the ubiquitous eIoT.
Wang, Yongtao.  2019.  Development of AtoN Real-time Video Surveillance System Based on the AIS Collision Warning. 2019 5th International Conference on Transportation Information and Safety (ICTIS). :393–398.
In view of the challenges in Aids to Navigation (AtoN) management and emergency response, the present study designs and presents an AtoN real-time video surveillance system based on AIS collision warning. The key technologies, covering AtoN pan-tilt head control and testing algorithms, video image fusion, and system operation and implementation, are demonstrated in detail. A case study is performed on the Guan River (China) to verify the effectiveness of the AtoN real-time video surveillance system for maritime security supervision. The research results indicate that the intelligence level of AtoN maintenance and management can be significantly improved. The modular design brings good flexibility and high portability to the present surveillance system and therefore provides guidance for the design of similar maritime surveillance systems.
Kim, Dongchil, Kim, Kyoungman, Park, Sungjoo.  2019.  Automatic PTZ Camera Control Based on Deep-Q Network in Video Surveillance System. 2019 International Conference on Electronics, Information, and Communication (ICEIC). :1–3.
Recently, Pan/Tilt/Zoom (PTZ) cameras have been widely used in video surveillance systems. However, it is difficult to automatically control PTZ cameras according to moving objects in the surveillance area. This paper proposes an automatic camera control method based on a Deep-Q Network (DQN) for improving the recognition accuracy of anomalous actions in a video surveillance system. To generate PTZ camera control values, the proposed method uses the position and size information of the object received from the video analysis system. Implementation results show that the proposed method can automatically control the PTZ camera according to moving objects.
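
As a hedged illustration of the setup, the sketch below pairs a small Q-network with an epsilon-greedy policy over discrete PTZ commands, taking the object's normalized position and size as the state; the action set, reward design, and network shape are assumptions rather than the paper's exact formulation.

    # Q-network for discrete PTZ commands (sketch, not the paper's model).
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, models

    ACTIONS = ["pan_left", "pan_right", "tilt_up", "tilt_down",
               "zoom_in", "zoom_out", "hold"]

    q_net = models.Sequential([
        layers.Dense(64, activation="relu", input_shape=(4,)),  # (cx, cy, w, h)
        layers.Dense(64, activation="relu"),
        layers.Dense(len(ACTIONS)),      # one Q-value per PTZ command
    ])

    def select_action(state, epsilon=0.1):
        """Epsilon-greedy action selection over PTZ commands."""
        if np.random.rand() < epsilon:
            return np.random.randint(len(ACTIONS))
        q_values = q_net(state[None, :].astype("float32"))
        return int(tf.argmax(q_values[0]))

    state = np.array([0.7, 0.4, 0.1, 0.2])   # object right of center
    print(ACTIONS[select_action(state)])
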
2020-03-30
Bharati, Aparna, Moreira, Daniel, Brogan, Joel, Hale, Patricia, Bowyer, Kevin, Flynn, Patrick, Rocha, Anderson, Scheirer, Walter.  2019.  Beyond Pixels: Image Provenance Analysis Leveraging Metadata. 2019 IEEE Winter Conference on Applications of Computer Vision (WACV). :1692–1702.
Creative works, whether paintings or memes, follow unique journeys that result in their final form. Understanding these journeys, a process known as "provenance analysis," provides rich insights into the use, motivation, and authenticity underlying any given work. The application of this type of study to the expanse of unregulated content on the Internet is what we consider in this paper. Provenance analysis provides a snapshot of the chronology and validity of content as it is uploaded, re-uploaded, and modified over time. Although still in its infancy, automated provenance analysis for online multimedia is already being applied to different types of content. Most current works seek to build provenance graphs based on the shared content between images or videos. This can be a computationally expensive task, especially when considering the vast influx of content that the Internet sees every day. Utilizing non-content-based information, such as timestamps, geotags, and camera IDs can help provide important insights into the path a particular image or video has traveled during its time on the Internet without large computational overhead. This paper tests the scope and applicability of metadata-based inferences for provenance graph construction in two different scenarios: digital image forensics and cultural analytics.
2020-03-09
Zhai, Liming, Wang, Lina, Ren, Yanzhen.  2019.  Multi-domain Embedding Strategies for Video Steganography by Combining Partition Modes and Motion Vectors. 2019 IEEE International Conference on Multimedia and Expo (ICME). :1402–1407.
Digital video has various types of entities, which are utilized as embedding domains to hide messages in steganography. However, nearly all video steganography uses only one type of embedding domain, resulting in limited embedding capacity and potential security risks. In this paper, we firstly propose to embed in multi-domains for video steganography by combining partition modes (PMs) and motion vectors (MVs). The multi-domain embedding (MDE) aims to spread the modifications to different embedding domains for achieving higher undetectability. The key issue of MDE is the interactions of entities across domains. To this end, we design two MDE strategies, which hide data in PM domain and MV domain by sequential embedding and simultaneous embedding respectively. These two strategies can be applied to existing steganography within a distortion-minimization framework. Experiments show that the MDE strategies achieve a significant improvement in security performance against targeted steganalysis and fusion based steganalysis.
2020-02-10
Zhang, Junjie, Sun, Tianfu.  2019.  Multi-core Heterogeneous Video Processing System Design. 2019 IEEE 13th International Conference on Anti-counterfeiting, Security, and Identification (ASID). :178–182.
In order to accelerate image processing, this paper proposes a multi-core heterogeneous computing technique based on the Xilinx Zynq platform, which accelerates a real-time video image processing system in hardware. To verify the proposed technique, a hardware-accelerated Otsu binarization IP is designed in the FPGA fabric and interacts with the ARM cores through the AXI bus. Compared with computing on an existing homogeneous processor architecture, the image processing speed of the proposed multi-core heterogeneous acceleration is significantly higher.
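
For reference, Otsu's binarization, the algorithm the hardware IP implements, computes the threshold that maximizes the between-class variance of the grayscale histogram. A plain Python version such as the following sketch can serve as a software golden model for checking the FPGA output (that validation use is an assumption, not a detail from the paper).

    # Reference Otsu threshold in Python.
    import numpy as np

    def otsu_threshold(gray):
        """Return the threshold maximizing between-class variance (Otsu)."""
        hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
        prob = hist / hist.sum()
        best_t, best_var = 0, 0.0
        for t in range(1, 256):
            w0, w1 = prob[:t].sum(), prob[t:].sum()
            if w0 == 0 or w1 == 0:
                continue
            mu0 = (np.arange(t) * prob[:t]).sum() / w0
            mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
            var_between = w0 * w1 * (mu0 - mu1) ** 2
            if var_between > best_var:
                best_var, best_t = var_between, t
        return best_t

    frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    binary = (frame >= otsu_threshold(frame)).astype(np.uint8) * 255
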
Velmurugan, K.Jayasakthi, Hemavathi, S..  2019.  Video Steganography by Neural Networks Using Hash Function. 2019 Fifth International Conference on Science Technology Engineering and Mathematics (ICONSTEM). 1:55–58.

Video steganography is an extension of image steganography in which a file of any type is hidden inside a digital video. Video content is dynamic in nature, and this makes the detection of hidden data more difficult than in other steganographic techniques. The main motive for using video steganography is that videos can store a large amount of data. This paper focuses on security, using a combination of hybrid neural networks and a hash function to determine the best bits in the cover video in which to embed the secret data. For the embedding process, the cover video and the data to be hidden are loaded, and then the hash algorithm and neural networks are applied to form the stego video. For the extraction process, the reverse process is applied and the secret data is obtained. All experiments were done using Matlab 2016a.

Alia, Mohammad A., Maria, Khulood Abu, Alsarayreh, Maher A., Maria, Eman Abu, Almanasra, Sally.  2019.  An Improved Video Steganography: Using Random Key-Dependent. 2019 IEEE Jordan International Joint Conference on Electrical Engineering and Information Technology (JEEIT). :234–237.

Steganography is defined as the art of hiding secret data in a non-secret digital carrier called the cover media. Exchanging sensitive data without protection against intruders who may intercept it is fatal, so transmitting sensitive information and secrets must not rely only on the protection offered by current communication channels; additional steps toward data protection must be taken. This article proposes an improved approach to video steganography. The improvement is achieved by searching for exact matches between the secret text and the RGB channels of the video frames, together with random key-dependent data, and the approach achieves the steganography performance criteria of invisibility, payload/capacity, and robustness.
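
One plausible reading of the exact-matching idea, sketched below, is that the cover video is left untouched: for each secret byte, the embedder records a key-dependent position whose channel value already equals that byte, and the key, not the video, carries the secret. This is an illustrative interpretation of the abstract, not the authors' published scheme.

    # Exact-match "embedding": record positions instead of modifying pixels.
    import numpy as np

    def build_key(frames, secret: bytes, rng):
        """Return (frame, row, col, channel) positions matching each secret byte."""
        key = []
        for byte in secret:
            f = rng.integers(len(frames))
            matches = np.argwhere(frames[f] == byte)   # (row, col, channel) hits
            if len(matches) == 0:
                raise ValueError("no exact match for byte %d" % byte)
            r, c, ch = matches[rng.integers(len(matches))]
            key.append((f, int(r), int(c), int(ch)))
        return key

    def extract(frames, key):
        return bytes(int(frames[f][r, c, ch]) for f, r, c, ch in key)

    rng = np.random.default_rng(42)   # stands in for key-dependent randomness
    frames = [rng.integers(0, 256, (240, 320, 3), dtype=np.uint8)
              for _ in range(8)]
    key = build_key(frames, b"secret", rng)
    assert extract(frames, key) == b"secret"   # cover video is unmodified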

2020-01-07
Hussain, Syed Saiq, Sohail Ibrahim, Muhammad, Mir, Syed Zain, Yasin, Sajid, Majeed, Muhammad Kashif, Ghani, Azfar.  2018.  Efficient Video Encryption Using Lightweight Cryptography Algorithm. 2018 3rd International Conference on Emerging Trends in Engineering, Sciences and Technology (ICEEST). :1–6.

The natural redundancy in video data, due to the spatio-temporal correlation of neighbouring pixels, requires a highly complex encryption process to cipher the data successfully. Conventional encryption methods are based on lengthy keys and a high number of rounds, which makes them inefficient for low-powered, small, battery-operated devices. Motivated by the success of lightweight encryption methods designed specifically for the IoT environment, an efficient method for video encryption is proposed herein. The proposed technique is based on a recently proposed encryption algorithm named Secure IoT (SIT), which utilizes the P and Q functions of the KHAZAD cipher to achieve strong encryption at low computational cost. Extensive simulations are performed to evaluate the efficacy of the proposed method, and the results are compared with the Secure Force (SF-64) cipher. Under all conditions the proposed method achieved significantly improved results.

2019-08-12
Liu, Y., Yang, Y., Shi, A., Jigang, P., Haowei, L..  2019.  Intelligent monitoring of indoor surveillance video based on deep learning. 2019 21st International Conference on Advanced Communication Technology (ICACT). :648–653.

With the rapid development of information technology, video surveillance systems have become a key part of the security and protection systems of modern cities. In prisons especially, surveillance cameras can be found almost everywhere. However, with the continuous expansion of surveillance networks, surveillance cameras not only bring convenience but also produce a massive amount of monitoring data, which poses huge challenges for storage, analytics, and retrieval. Smart monitoring systems equipped with intelligent video analytics can monitor, as well as raise early alarms for, abnormal events or behaviours, which is a hot research direction in the field of surveillance. This paper applies deep learning methods, using Mask R-CNN, a state-of-the-art framework for instance segmentation, to fine-tune a network on our datasets; it can efficiently detect objects in a video image while simultaneously generating a high-quality segmentation mask for each instance. The experiments show that our network is simple to train and easy to generalize to other datasets, and the mask average precision reaches nearly 98.5% on our own datasets.
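
A minimal inference sketch with the off-the-shelf Mask R-CNN that ships with torchvision; the paper fine-tunes on its own prison-surveillance datasets, which is omitted here, and the confidence threshold is an assumption.

    # Instance segmentation on a surveillance frame with torchvision (sketch).
    import torch
    import torchvision

    # pretrained=True loads COCO weights (deprecated spelling but functional;
    # newer torchvision prefers weights="DEFAULT").
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
    model.eval()

    frame = torch.rand(3, 480, 640)          # placeholder video frame in [0, 1]
    with torch.no_grad():
        output = model([frame])[0]           # dict: boxes, labels, scores, masks

    keep = output["scores"] > 0.5            # confidence filter (assumed value)
    boxes = output["boxes"][keep]
    masks = output["masks"][keep] > 0.5      # binary mask per detected instance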

2019-04-01
Usuzaki, S., Aburada, K., Yamaba, H., Katayama, T., Mukunoki, M., Park, M., Okazaki, N..  2018.  Interactive Video CAPTCHA for Better Resistance to Automated Attack. 2018 Eleventh International Conference on Mobile Computing and Ubiquitous Network (ICMU). :1–2.
A “Completely Automated Public Turing Test to Tell Computers and Humans Apart” (CAPTCHA) is widely used by online services to prevent bots from automatically creating large numbers of accounts. Interactive video-type CAPTCHAs, which attempt to detect relay attacks by exploiting the delay time introduced by communication relays, have been proposed. However, these approaches remain insufficiently resistant to bots. We propose a CAPTCHA that combines resistance to automated and relay attacks. In our CAPTCHA, the user recognizes a moving object (the target object) among a number of randomly appearing decoy objects and tracks the target with the mouse cursor; the user passes the test by tracking the target for a certain time. Since the target object moves quickly, the delay makes it difficult for a remote solver to break the CAPTCHA during a relay attack. It is also difficult for a bot to track the target using image processing, because the target looks the same as the decoys. We evaluated our CAPTCHA's resistance to relay and automated attacks. Our results show that, if our CAPTCHA's parameters are set to suitable values, a relay attack cannot be mounted economically, and the false acceptance rate for bots can be reduced to 0.01% without affecting the human success rate.
2018-11-19
Chen, D., Liao, J., Yuan, L., Yu, N., Hua, G..  2017.  Coherent Online Video Style Transfer. 2017 IEEE International Conference on Computer Vision (ICCV). :1114–1123.

Training a feed-forward network for fast neural style transfer of images has proven successful, but the naive extension of processing videos frame by frame is prone to producing flickering results. We propose the first end-to-end network for online video style transfer, which generates temporally coherent stylized video sequences in near real time. Two key ideas are an efficient network that incorporates short-term coherence, and the propagation of short-term coherence to the long term, which ensures consistency over a longer period of time. Our network can incorporate different image stylization networks and clearly outperforms the per-frame baseline both qualitatively and quantitatively. Moreover, it achieves visually comparable coherence to optimization-based video style transfer while being three orders of magnitude faster.

Gupta, A., Johnson, J., Alahi, A., Fei-Fei, L..  2017.  Characterizing and Improving Stability in Neural Style Transfer. 2017 IEEE International Conference on Computer Vision (ICCV). :4087–4096.

Recent progress in style transfer on images has focused on improving the quality of stylized images and the speed of the methods. However, real-time methods are highly unstable, resulting in visible flickering when applied to videos. In this work we characterize the instability of these methods by examining the solution set of the style transfer objective. We show that the trace of the Gram matrix representing style is inversely related to the stability of the method. We then present a recurrent convolutional network for real-time video style transfer which incorporates a temporal consistency loss and overcomes the instability of prior methods. Our networks can be applied at any resolution, do not require optical flow at test time, and produce high-quality, temporally consistent stylized videos in real time.

Huang, H., Wang, H., Luo, W., Ma, L., Jiang, W., Zhu, X., Li, Z., Liu, W..  2017.  Real-Time Neural Style Transfer for Videos. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). :7044–7052.

Recent research endeavors have shown the potential of using feed-forward convolutional neural networks to accomplish fast style transfer for images. In this work, we take one step further to explore the possibility of exploiting a feed-forward network to perform style transfer for videos and simultaneously maintain temporal consistency among stylized video frames. Our feed-forward network is trained by enforcing the outputs of consecutive frames to be both well stylized and temporally consistent. More specifically, a hybrid loss is proposed to capitalize on the content information of input frames, the style information of a given style image, and the temporal information of consecutive frames. To calculate the temporal loss during the training stage, a novel two-frame synergic training mechanism is proposed. Compared with directly applying an existing image style transfer method to videos, our proposed method employs the trained network to yield temporally consistent stylized videos which are much more visually pleasant. In contrast to the prior video style transfer method which relies on time-consuming optimization on the fly, our method runs in real time while generating competitive visual results.
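
The common thread in these three papers is a temporal consistency loss: the previous stylized frame is warped by optical flow toward the current one, and differences are penalized in non-occluded regions. Below is a hedged PyTorch sketch of that loss; the flow and occlusion mask are assumed to be computed elsewhere (at training time only), and the exact weighting differs between papers.

    # Temporal consistency loss for video style transfer (sketch).
    import torch
    import torch.nn.functional as F

    def warp(prev, flow):
        """Backward-warp frames `prev` (N,C,H,W) by optical flow (N,2,H,W)."""
        n, _, h, w = prev.shape
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        base = torch.stack((xs, ys), dim=0).float().unsqueeze(0)  # (1,2,H,W)
        coords = base + flow
        # Normalize pixel coordinates to [-1, 1] for grid_sample.
        gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
        gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
        grid = torch.stack((gx, gy), dim=-1)                      # (N,H,W,2)
        return F.grid_sample(prev, grid, align_corners=True)

    def temporal_loss(stylized_t, stylized_prev, flow, occlusion_mask):
        """Penalize change between consecutive stylized frames where visible."""
        warped = warp(stylized_prev, flow)
        return ((occlusion_mask * (stylized_t - warped)) ** 2).mean()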

2018-04-04
Xie, D., Wang, Y..  2017.  High definition wide dynamic video surveillance system based on FPGA. 2017 IEEE 2nd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC). :2403–2407.

A high-definition (HD) wide-dynamic-range video surveillance system is designed and implemented on a Field Programmable Gate Array (FPGA). The system is composed of three subsystems: video capture, wide-dynamic-range video processing, and video display. Images are captured directly by a camera configured to use long exposure in odd frames and short exposure in even frames. The video data stream is buffered in DDR2 SDRAM to obtain two adjacent frames, and image fusion is then completed by combining the long-exposure image with the short-exposure image pixel by pixel. The video display subsystem outputs the image through an HDMI interface. The system is built on a Lattice ECP3-70EA FPGA with a Panasonic MN34229 camera sensor. The experimental results show that the system can expand the dynamic range of HD video at 30 frames per second and a resolution of 1920×1080 pixels through real-time wide-dynamic-range (WDR) processing, and it has high practical value.
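
The paper does not give the fusion formula, but a minimal pixel-by-pixel blend of an adjacent long/short exposure pair might look like the following sketch, where the brightness-dependent weight is an assumption.

    # Pixel-by-pixel fusion of alternating long/short exposures (sketch).
    import numpy as np

    def fuse(long_exp, short_exp):
        """Weight toward the short exposure where the long exposure saturates."""
        l = long_exp.astype(np.float32)
        s = short_exp.astype(np.float32)
        w = np.clip((l - 200.0) / 55.0, 0.0, 1.0)   # near-saturated -> use short
        return np.clip((1 - w) * l + w * s, 0, 255).astype(np.uint8)

    # Adjacent frame pair, as buffered in DDR2 in the hardware design:
    long_frame = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
    short_frame = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
    wdr_frame = fuse(long_frame, short_frame)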

Nawaratne, R., Bandaragoda, T., Adikari, A., Alahakoon, D., Silva, D. De, Yu, X..  2017.  Incremental knowledge acquisition and self-learning for autonomous video surveillance. IECON 2017 - 43rd Annual Conference of the IEEE Industrial Electronics Society. :4790–4795.

The world is witnessing a remarkable increase in the usage of video surveillance systems. Besides fulfilling an imperative security and safety purpose, they also contribute to operations monitoring, hazard detection, and facility management in industry/smart-factory settings. Most existing surveillance techniques use hand-crafted features analyzed by standard machine learning pipelines for action recognition and event detection. A key shortcoming of such techniques is the inability to learn from unlabeled video streams. The entire video stream is unlabeled when the requirement is to detect irregular, unforeseen, and abnormal behaviors, i.e., anomalies. Recent developments in intelligent high-level video analysis have been successful in identifying individual elements in a video frame. However, the detection of anomalies in an entire video feed requires incremental and unsupervised machine learning. This paper presents a novel approach that combines high-level video analysis outcomes with incremental knowledge acquisition and self-learning for autonomous video surveillance. The proposed approach is capable of detecting changes that occur over time and separating irregularities from re-occurrences, without the prerequisite of a labeled dataset. We demonstrate the proposed approach on a benchmark video dataset, and the results confirm its validity and usability for autonomous video surveillance.

Jin, Y., Eriksson, J..  2017.  Fully Automatic, Real-Time Vehicle Tracking for Surveillance Video. 2017 14th Conference on Computer and Robot Vision (CRV). :147–154.

We present an object tracking framework which fuses multiple unstable video-based methods and supports automatic tracker initialization and termination. To evaluate our system, we collected a large dataset of hand-annotated 5-minute traffic surveillance videos, which we are releasing to the community. To the best of our knowledge, this is the first publicly available dataset of such long videos, providing a diverse range of real-world object variation, scale change, interaction, different resolutions and illumination conditions. In our comprehensive evaluation using this dataset, we show that our automatic object tracking system often outperforms state-of-the-art trackers, even when these are provided with proper manual initialization. We also demonstrate tracking throughput improvements of 5× or more vs. the competition.

Rupasinghe, R. A. A., Padmasiri, D. A., Senanayake, S. G. M. P., Godaliyadda, G. M. R. I., Ekanayake, M. P. B., Wijayakulasooriya, J. V..  2017.  Dynamic clustering for event detection and anomaly identification in video surveillance. 2017 IEEE International Conference on Industrial and Information Systems (ICIIS). :1–6.

This work introduces concepts and algorithms, along with a case study validating them, to enhance event detection, pattern recognition, and anomaly identification in real-life video surveillance. The motivation for the work lies in the observation that human behavioral patterns generally evolve and adapt continuously with time, rather than being static. First, limitations in existing work with respect to this phenomenon are identified. Accordingly, the notion and algorithms of Dynamic Clustering are introduced to overcome these drawbacks. Correspondingly, we propose maintaining two separate sets of data in parallel, namely the Normal Plane and the Anomaly Plane, to accomplish the task of continuous learning. The practicability of the proposed algorithms in a real-life scenario is demonstrated through a case study. The analysis presented in this work shows that a more comprehensive analysis, closely following human perception, can be accomplished by incorporating the proposed notions and algorithms into a video surveillance system.
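
A toy sketch of the two-plane idea under stated assumptions: incoming feature vectors update Normal Plane centroids when they match, while unmatched ones sit in the Anomaly Plane until enough re-occurrences promote them. The distance threshold, learning rate, and promotion count are illustrative, not taken from the paper.

    # Two-plane dynamic clustering (sketch).
    import numpy as np

    class TwoPlaneClusters:
        def __init__(self, radius=2.0, promote_after=5, lr=0.1):
            self.normal, self.anomaly = [], []   # centroids per plane
            self.counts = []                     # re-occurrence count per anomaly
            self.radius, self.promote_after, self.lr = radius, promote_after, lr

        def observe(self, x):
            for i, c in enumerate(self.normal):
                if np.linalg.norm(x - c) < self.radius:
                    self.normal[i] = c + self.lr * (x - c)   # drift with behaviour
                    return "normal"
            for i, c in enumerate(self.anomaly):
                if np.linalg.norm(x - c) < self.radius:
                    self.counts[i] += 1
                    if self.counts[i] >= self.promote_after:  # frequent re-occurrence
                        self.normal.append(self.anomaly.pop(i))
                        self.counts.pop(i)
                        return "promoted to normal"
                    return "re-occurring"
            self.anomaly.append(x.astype(float))              # new irregularity
            self.counts.append(1)
            return "anomaly"

    model = TwoPlaneClusters()
    for x in np.random.normal(size=(100, 8)):   # stand-in behaviour features
        model.observe(x)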