Bibliography
A new approach to micro-Doppler signal analysis is presented in this article. Novel chirp rate estimators in the time-frequency domain were used for this purpose; they provide the chirp rate of micro-Doppler signatures, allowing the classification of objects in the urban environment. As an example verifying the method, a signal from a high-resolution linear frequency-modulated continuous-wave (FMCW) radar recording an echo reflected from a pedestrian was used to validate the proposed chirp rate estimation algorithms. The obtained results are plotted on saturated accelerograms, giving an additional parameter for target classification in security systems that utilize radar sensors for target detection.
In painting, humans can draw an interrelation between the style and the content of a given image in order to enhance the visual experience. Deep neural networks such as convolutional neural networks are being used to draw a satisfying conclusion to this problem of neural style transfer due to their exceptional results in key areas of visual perception such as object detection and face recognition. In this study, along with style transfer on the whole image, it is also outlined how style transfer can be performed only on specific parts of the content image, which is accomplished by using masks. The style is transferred in a way that causes the least amount of loss to the content image, i.e., the semantics of the image are preserved.
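A minimal sketch of the masking idea described above, assuming VGG-style feature maps (random placeholders here) and a standard Gram-matrix style loss; the function names, shapes and mask are illustrative, not the study's actual code:

```python
import torch

def gram_matrix(feat):
    # feat: (C, H, W) activation map -> (C, C) Gram matrix of channel correlations
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.t() / (c * h * w)

def masked_style_loss(gen_feat, style_feat, mask):
    # mask: (H, W) binary map; style statistics are matched only where mask == 1
    masked_gen = gen_feat * mask            # broadcasts the mask over channels
    masked_style = style_feat * mask
    return torch.nn.functional.mse_loss(gram_matrix(masked_gen),
                                        gram_matrix(masked_style))

# toy example with placeholder activations standing in for VGG features
gen = torch.rand(64, 32, 32, requires_grad=True)
style = torch.rand(64, 32, 32)
mask = torch.zeros(32, 32)
mask[:, :16] = 1.0                          # transfer style only onto the left half
loss = masked_style_loss(gen, style, mask)
loss.backward()                             # gradients are zero outside the mask
print(loss.item())
```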
Several computer vision applications such as object detection and face recognition have started to rely completely on deep learning based architectures. These architectures, when paired with appropriate loss functions and optimizers, produce state-of-the-art results in a myriad of problems. On the other hand, with the advent of "blockchain", the cybersecurity industry has developed a new sense of trust which was earlier missing from both the technical and commercial perspectives. Employment of cryptographic hashes as well as symmetric/asymmetric encryption and decryption algorithms ensures security without any human intervention (i.e., without a centralized authority). In this research, we present the synergy between the best of both these worlds. We first propose a model which uses the learned parameters of a typical deep neural network and is secured from external adversaries by cryptography and blockchain technology. As the second contribution of the proposed research, a new parameter tampering attack is proposed to properly justify the role of blockchain in machine learning.
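A minimal sketch, under assumed details not given in the abstract, of how learned parameters could be fingerprinted with a cryptographic hash and recorded in a simple chained ledger so that a parameter tampering attack becomes detectable; the block structure and helper names are illustrative:

```python
import hashlib
import json

def hash_params(params):
    # params: dict mapping layer names to (serializable) weight lists
    blob = json.dumps(params, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

class Ledger:
    """A toy append-only chain; each block stores the previous block's payload hash."""
    def __init__(self):
        self.blocks = []

    def append(self, payload_hash):
        prev = self.blocks[-1]["payload"] if self.blocks else "0" * 64
        self.blocks.append({"prev": prev, "payload": payload_hash})

params = {"conv1.weight": [0.12, -0.07, 0.33], "fc.bias": [0.01]}
ledger = Ledger()
ledger.append(hash_params(params))

# a parameter tampering attack changes a weight; the recomputed hash no longer matches
params["fc.bias"][0] = 0.5
print("tampering detected:", hash_params(params) != ledger.blocks[-1]["payload"])
```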
Most of the data manipulation attacks on deep neural networks (DNNs) during the training stage introduce a perceptible noise that can be catered for by preprocessing during inference, or can be identified during the validation phase. Therefore, data poisoning attacks during inference (e.g., adversarial attacks) are becoming more popular. However, many of them do not consider the imperceptibility factor in their optimization algorithms, and can be detected by correlation and structural similarity analysis, or are noticeable (e.g., by humans) in a multi-level security system. Moreover, the majority of inference attacks rely on some knowledge about the training dataset. In this paper, we propose a novel methodology which automatically generates imperceptible attack images by using the back-propagation algorithm on pre-trained DNNs, without requiring any information about the training dataset (i.e., completely training data-unaware). We present a case study on traffic sign detection using the VGGNet trained on the German Traffic Sign Recognition Benchmark dataset in an autonomous driving use case. Our results demonstrate that the generated attack images successfully perform misclassification while remaining imperceptible in both “subjective” and “objective” quality tests.
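A minimal sketch of the general idea, using a plain FGSM-style gradient step as a stand-in for the paper's back-propagation-based attack: the perturbation comes only from a pre-trained network's gradients, with no access to training data. The model is left with uninitialized weights here to keep the example offline, whereas the paper uses a trained VGGNet; the image, label and epsilon are placeholders:

```python
import torch
import torchvision

model = torchvision.models.vgg16(weights=None).eval()   # stand-in for a trained VGGNet
image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder "traffic sign" image
label = torch.tensor([1])                                # class the attacker wants to escape

loss = torch.nn.functional.cross_entropy(model(image), label)
loss.backward()

epsilon = 2.0 / 255.0                                    # small bound keeps the change imperceptible
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()
print("max per-pixel change:", (adversarial - image).abs().max().item())
```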
Neural architectures are the foundation for improving the performance of deep neural networks (DNNs). This paper presents deep compositional grammatical architectures which harness the best of two worlds: grammar models and DNNs. The proposed architectures integrate the compositionality and reconfigurability of the former and the capability of learning rich features of the latter in a principled way. We utilize an AND-OR Grammar (AOG) as the network generator in this paper and call the resulting networks AOGNets. An AOGNet consists of a number of stages, each of which is composed of a number of AOG building blocks. An AOG building block splits its input feature map into N groups along the feature channels and then treats it as a sentence of N words. It then jointly realizes a phrase structure grammar and a dependency grammar in parsing the “sentence” bottom-up for better feature exploration and reuse. It provides a unified framework for the best practices developed in state-of-the-art DNNs. In experiments, AOGNet is tested on the ImageNet-1K classification benchmark and the MS-COCO object detection and segmentation benchmark. On ImageNet-1K, AOGNet obtains better performance than ResNet and most of its variants, ResNeXt and its attention-based variants such as SENet, DenseNet and DualPathNet. AOGNet also obtains the best model interpretability score using network dissection. AOGNet further shows better potential in adversarial defense. On MS-COCO, AOGNet obtains better performance than the ResNet and ResNeXt backbones in Mask R-CNN.
Deep learning is the branch of artificial intelligence concerned with imitating the learning approach that human beings use to acquire certain types of knowledge. Video analysis, one application of deep learning, has been one of the most fundamental problems of computer vision and multimedia content analysis for at least 20 years. The task is very challenging because video contains a lot of information with large variations and difficulties. Human supervision is still required in all surveillance systems. New advances in computer vision, observed as an important trend in video surveillance, lead to dramatic efficiency gains. We propose CCTV-based theft detection along with tracking of thieves. We use image processing to detect theft and the motion of thieves in CCTV footage, without the use of sensors. This system concentrates on object detection. Security personnel can be notified about a suspicious individual committing burglary using real-time analysis of human movement from CCTV footage, which thus gives a chance to avert the theft.
Deep learning has undergone tremendous advancements in computer vision studies. The training of deep learning neural networks depends on a considerable amount of ground truth data. However, labeling ground truth data is a labor-intensive task, particularly for large-volume video analytics applications such as video surveillance and vehicle detection for autonomous driving. This paper presents a rapid and accurate method for associative searching in big image data obtained from security monitoring systems. We developed a semi-automatic moving object annotation method for improving deep learning models. The proposed method comprises three stages, namely automatic foreground object extraction, object annotation in subsequent video frames, and dataset construction using human-in-the-loop quick selection. Furthermore, the proposed method expedites the dataset collection and ground truth annotation processes. In contrast to data augmentation and data generative models, the proposed method produces a large amount of real data, which may facilitate training results and avoid the adverse effects engendered by artifactual data. We applied the constructed annotation dataset to train a deep learning you-only-look-once (YOLO) model to perform vehicle detection on street intersection surveillance videos. Experimental results demonstrated that detection performance was improved from a mean average precision (mAP) of 83.99 to 88.03.
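A minimal sketch of the first stage only, automatic foreground object extraction, using OpenCV background subtraction on synthetic frames; the thresholds, frame contents and box filter are illustrative assumptions, not the paper's settings:

```python
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=50, varThreshold=25)

# synthetic frames: a static background with a bright "vehicle" patch that moves
frames = []
for x in range(0, 60, 10):
    frame = np.zeros((120, 160, 3), dtype=np.uint8)
    frame[40:70, x:x + 30] = 255
    frames.append(frame)

for i, frame in enumerate(frames):
    mask = subtractor.apply(frame)                      # foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]
    print(f"frame {i}: candidate boxes {boxes}")        # (x, y, w, h) proposals for annotation
```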
Safety is one of the basic human needs, so we need a security system that is able to prevent crime from happening. Commonly, we use surveillance video to watch the environment and human behaviour in a location. However, surveillance video can only be used to record images or videos, with no additional information. Therefore, we need a more advanced camera system to obtain additional information such as human position and movement. This research extracts that information from surveillance video footage by using human detection and tracking algorithms. The human detection framework is based on deep learning convolutional neural networks, a very popular branch of artificial intelligence. For tracking, a channel and spatial correlation filter is used to track each detected human. The system generates and exports the tracked movement in the footage as additional information. This tracked movement can be analysed further in other research on surveillance video problems.
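A minimal sketch, assuming opencv-contrib-python, of how a detected person box could be followed with a CSRT correlation filter tracker and the positions exported; the synthetic frames and box coordinates are placeholders, not the research's pipeline:

```python
import cv2
import numpy as np

def make_frame(x):
    # synthetic footage: a bright "person" patch that moves to the right
    frame = np.zeros((240, 320, 3), dtype=np.uint8)
    frame[100:160, x:x + 30] = 200
    return frame

tracker = cv2.TrackerCSRT_create()                # channel and spatial reliability tracker
tracker.init(make_frame(20), (20, 100, 30, 60))   # (x, y, w, h) box from the person detector

trajectory = []
for x in range(25, 125, 5):
    ok, box = tracker.update(make_frame(x))
    if ok:
        trajectory.append(box)                     # exported movement information

print(trajectory[:3])
```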
Person re-identification (Person Re-ID) means that images of a pedestrian from cameras in a surveillance camera network can be automatically retrieved based on an image of that pedestrian from another camera. The appearance change of pedestrians under different cameras poses a huge challenge to person re-identification. Person re-identification systems based on deep learning can effectively extract the appearance features of pedestrians. In this paper, a feature enhancement experiment is conducted, and the results show that current person re-identification datasets are relatively small and cannot fully meet the needs of deep training. Therefore, this paper studies the use of generative adversarial networks to extend person re-identification datasets and proposes a label smoothing regularization for outliers with weight (LSROW) algorithm to make full use of the generated data, effectively improving the accuracy of person re-identification.
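A minimal sketch of the label smoothing regularization for outliers (LSRO) idea that LSROW builds on: GAN-generated images receive a uniform label distribution over all identities instead of a one-hot label. The weighting introduced by LSROW itself is not reproduced here, and the tensor shapes are illustrative:

```python
import torch
import torch.nn.functional as F

def lsro_loss(logits, labels, is_generated):
    # logits: (B, K) identity scores; labels: (B,) ids; is_generated: (B,) bool mask
    log_probs = F.log_softmax(logits, dim=1)
    k = logits.shape[1]
    one_hot = F.one_hot(labels, num_classes=k).float()
    uniform = torch.full_like(one_hot, 1.0 / k)          # uniform target for generated images
    targets = torch.where(is_generated.unsqueeze(1), uniform, one_hot)
    return -(targets * log_probs).sum(dim=1).mean()

logits = torch.randn(4, 751)                             # e.g., 751 training identities
labels = torch.tensor([3, 10, 0, 0])                     # labels are ignored for generated samples
is_generated = torch.tensor([False, False, True, True])
print(lsro_loss(logits, labels, is_generated))
```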
Multi-tag identification techniques have been applied widely in RFID systems to increase system flexibility. However, they also bring serious tag collision issues, which demand efficient anti-collision schemes. In this paper, we propose a multi-target tag slot assignment algorithm based on a hash function (MTSH) for efficient multi-tag identification. The proposed algorithm can estimate the number of tags and dynamically adjust the frame length. Specifically, according to the number of tags, the proposed algorithm handles two cases. When the number of tags is small, a hash function is constructed to map the tags into corresponding slots. When the number of tags is large, the tags are grouped and randomly mapped into slots. During tag identification, tags are paired with a certain matching rate and some tags then exit to improve the efficiency of the system. The simulation results indicate that the proposed algorithm outperforms traditional anti-collision algorithms in terms of system throughput, stability and identification efficiency.
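A minimal sketch, not the exact MTSH protocol, of the two cases described above: with few tags each tag is hashed directly to a slot, while with many tags the tags are first grouped and each group is mapped to a random slot; the threshold and frame length are assumed values:

```python
import hashlib
import random

def slot_of(tag_id, modulus):
    # deterministic hash of the tag ID reduced modulo the number of slots/groups
    return int(hashlib.md5(tag_id.encode()).hexdigest(), 16) % modulus

def assign_slots(tags, frame_length, small_threshold=16):
    if len(tags) <= small_threshold:
        # small population: hash every tag directly into a slot of the frame
        return {tag: slot_of(tag, frame_length) for tag in tags}
    # large population: group the tags first, then map each group to a random slot
    groups = {}
    for tag in tags:
        groups.setdefault(slot_of(tag, small_threshold), []).append(tag)
    assignment = {}
    for group in groups.values():
        slot = random.randrange(frame_length)
        for tag in group:
            assignment[tag] = slot
    return assignment

tags = [f"TAG{i:04d}" for i in range(40)]
print(assign_slots(tags, frame_length=32))
```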
This paper focuses on a performance comparison of intrusion detection systems between the DBN algorithm and the SPELM algorithm. Researchers have used this new SPELM algorithm to perform experiments in face recognition, pedestrian detection, and network intrusion detection in the area of cyber security. The scholar used the proposed State Preserving Extreme Learning Machine (SPELM) algorithm as the machine learning classifier and compared its performance with the Deep Belief Network (DBN) algorithm using the NSL-KDD dataset. The NSL-KDD dataset has about four lakh (400,000) data records, of which 40% were used for training and 60% for testing while calculating the performance of both algorithms. The experiments performed by the scholar compared the accuracy, precision, recall and computational time of the existing DBN algorithm with the proposed SPELM algorithm. The findings show better performance of SPELM: an accuracy of 93.20% as against 52.8% for the DBN algorithm, a precision of 69.492 for SPELM as against 66.836 for DBN, and a computational time of 90.8 seconds for SPELM as against 102 seconds for the DBN algorithm.
The usage of small drones/UAVs has significantly increased recently. Consequently, there is a rising potential for small drones to be misused for illegal activities such as terrorism and smuggling of drugs, posing high security risks. Hence, tracking and surveillance of drones are essential to prevent security breaches. The similarity in appearance between small drones and birds in complex backgrounds makes it challenging to detect drones in surveillance videos. This paper addresses the challenge of detecting small drones in surveillance videos using popular and advanced deep learning-based object detection methods. Different CNN-based architectures such as ResNet-101 and Inception with Faster R-CNN, as well as the Single Shot Detector (SSD) model, were used for the experiments. Due to the sparse data available for the experiments, pre-trained models were used while training the CNNs via transfer learning. The best results were obtained from experiments using Faster R-CNN with ResNet-101 as the base architecture. Experimental analysis on the different CNN architectures is presented in the paper, along with a visual analysis of the test dataset.
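A minimal sketch of a transfer-learning setup for drone detection. torchvision only ships a Faster R-CNN with a ResNet-50 FPN backbone, used here as a stand-in for the ResNet-101 backbone reported in the paper, and passing None for the weights keeps the example offline whereas the paper starts from pre-trained models; only the detection head is replaced for the single drone class:

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# uninitialized weights keep the example offline; the paper fine-tunes pre-trained models
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None, weights_backbone=None)

# replace the classification head: 2 classes = background + drone
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

# fine-tune with a standard optimizer over the trainable parameters
params_to_train = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params_to_train, lr=0.005, momentum=0.9)
print(model.roi_heads.box_predictor)
```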
This article shows the possibility of detecting hidden information in images. This is an approach to steganalysis in which the basic data about the image and the information about the method used to hide the information are unknown. The architecture of the convolutional neural network makes it possible to detect small changes in the image with high probability.
Nowadays, video steganography has developed for secure communication among various users. The two important factors of a steganography method are embedding efficiency and embedding payload. Here, a Multiple Object Tracking (MOT) algorithm is used to detect moving objects and to produce a foreground mask. The Discrete Wavelet Transform (DWT) and Discrete Cosine Transform (DCT) are used in the message embedding and extraction stages. In the existing system, the least significant bit (LSB) method was used; that technique of hiding data may lose some data after file transformations. The suggested multiple object tracking algorithm increases embedding and extraction speed and also protects the secret message against various attackers.
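A minimal sketch, under an assumed quantization scheme rather than the paper's exact method, of DWT-domain embedding: one bit of the secret message is written into the parity of a quantized detail coefficient of the cover frame (PyWavelets assumed; the extraction stage is omitted):

```python
import numpy as np
import pywt

def embed_bits(frame, bits, step=8.0):
    # Haar DWT of the cover frame; bits go into the diagonal detail band
    ll, (lh, hl, hh) = pywt.dwt2(frame.astype(float), "haar")
    flat = hh.flatten()
    for i, bit in enumerate(bits):
        q = int(round(flat[i] / step))
        if q % 2 != bit:                    # force the coefficient's parity to the bit
            q += 1
        flat[i] = q * step
    return pywt.idwt2((ll, (lh, hl, flat.reshape(hh.shape))), "haar")

frame = np.random.randint(0, 256, (64, 64))
stego = embed_bits(frame, [1, 0, 1, 1, 0])
print(np.abs(stego - frame).max())          # per-pixel change stays near the quantization step
```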
CFRS (Collaborative Filtering Recommendation System) is one of the most widely used individualized recommendation systems. However, CFRS is susceptible to shilling attacks based on profile injection. Current research on shilling attacks mainly focuses on the recognition of false user profiles, but these methods depend on specific attack models and their computational cost is huge. From the view of the item, some abnormal item detection methods have been proposed which are independent of attack models and overcome the defects of user profile models, but their detection rate, false alarm rate and time overhead need to be further improved. In order to solve these problems, this paper proposes an abnormal item detection method based on time window merging. This method first uses small windows to partition the rating time series and determines whether each window is suspicious in terms of the number of abnormal ratings within it. Then, the suspicious small windows are merged to form suspicious intervals. We use the rating distribution characteristics RAR (Ratio of Abnormal Rating), ATIAR (Average Time Interval of Abnormal Rating), DAR (Deviation of Abnormal Rating) and DTIAR (Deviation of Time Interval of Abnormal Rating) in the suspicious intervals to determine whether the item is subject to attacks. Experimental results on the MovieLens 100K dataset show that the method has a high detection rate and a low false alarm rate.
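A minimal sketch of the windowing and merging step described above, with an assumed window length and abnormal-rating threshold; the interval statistics (RAR, ATIAR, DAR, DTIAR) are not reproduced here:

```python
def suspicious_intervals(ratings, window=3600, abnormal_threshold=3):
    # ratings: list of (timestamp, is_abnormal) pairs for one item, sorted by time
    if not ratings:
        return []
    start = ratings[0][0]
    counts = {}
    for ts, abnormal in ratings:
        if abnormal:
            idx = (ts - start) // window
            counts[idx] = counts.get(idx, 0) + 1
    flagged = sorted(i for i, c in counts.items() if c >= abnormal_threshold)

    intervals, current = [], None
    for i in flagged:
        if current and i == current[1] + 1:          # adjacent suspicious windows merge
            current = (current[0], i)
        else:
            if current:
                intervals.append(current)
            current = (i, i)
    if current:
        intervals.append(current)
    # convert window indices back to timestamp ranges
    return [(start + a * window, start + (b + 1) * window) for a, b in intervals]

ratings = [(0, True), (100, True), (200, True), (5000, False), (7300, True),
           (7400, True), (7500, True), (11000, True), (11050, True), (11100, True)]
print(suspicious_intervals(ratings))       # two merged suspicious intervals
```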
Today, as surveillance systems are widely used for indoor and outdoor monitoring applications, there is a growing interest in real-time detection, and there are many different applications for real-time detection and analysis. Two-dimensional video is used in multimedia content-based indexing, information acquisition, visual surveillance and distributed cross-camera surveillance systems, human tracking, traffic monitoring and similar applications. Following a moving target is of great importance for the development of national security systems within the scope of military applications. In this research, a more efficient solution is proposed in addition to the existing methods. Therefore, we present YOLO, a new approach to object detection, for military applications.