Bibliography
Proving the integrity of video data produced by surveillance cameras requires active forensic methods such as signatures; otherwise, authenticity and integrity can be compromised and the data becomes unusable, e.g., as legal evidence. A simple file or stream signature, however, loses its validity when the stream is cut into parts or when data and signature are separated. Using security principles for distributed systems similar to those of blockchain and distributed ledger technologies (BC/DLT), a chain consisting of the frames of a video, whose frame hash values are distributed among a camera sensor network, is presented. The backbone of this Framechain within the camera sensor network is a camera identity concept that ensures accountability, integrity, and authenticity according to the extended CIA triad security concept. Modularity through secure sequences, autarky in proof, and robustness against natural modulation of the data are the key parameters of this new approach. It allows the standalone data, and even parts of it, to be used as hard evidence.
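A minimal sketch of the frame-chaining idea: each frame's hash is linked to its predecessor's, so cutting the stream or substituting a frame breaks every later link. The use of OpenCV and SHA-256, and the function name, are illustrative assumptions rather than the paper's implementation; the distribution of hash values across the camera sensor network is omitted.

```python
import hashlib

import cv2  # assumed frame source; any decoder yielding raw frames works


def chain_video_frames(path):
    """Link each frame's hash to its predecessor's, hash-chain style.

    Removing, reordering, or altering any frame invalidates the digest
    of every frame that follows it.
    """
    capture = cv2.VideoCapture(path)
    digests = []
    previous = b""  # empty link for the first (genesis) frame
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        # Hash the raw frame bytes together with the previous link.
        digest = hashlib.sha256(previous + frame.tobytes()).hexdigest()
        digests.append(digest)
        previous = digest.encode()
    capture.release()
    return digests
```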
Despite advances in autonomous functionality for robots, teleoperation remains a means for performing delicate tasks in safety-critical contexts such as explosive ordnance disposal (EOD) and in ambiguous environments. Immersive stereoscopic displays have been proposed and developed in this regard, but bring their own specific problems, e.g., simulator sickness. This work builds upon standardized test environments to yield reproducible comparisons between different robotic platforms. The focus was placed on testing three optronic systems of differing degrees of immersion: (1) a laptop display showing multiple monoscopic camera views, (2) an off-the-shelf virtual reality headset coupled with a pan-tilt-based stereoscopic camera, and (3) a so-called Telepresence Unit, providing fast pan, tilt, and yaw rotation, stereoscopic view, and spatial audio. The stereoscopic systems yielded significantly faster task completion only for the maneuvering task. As expected, they also induced simulator sickness, among other effects. However, the amount of simulator sickness varied between the two stereoscopic systems. The collected data suggest that a higher degree of immersion, combined with careful system design, can reduce the expected increase in simulator sickness compared to the monoscopic camera baseline, while making the interface subjectively more effective for certain tasks.
This article describes the development of two mobile applications for learning Digital Electronics. The first is an interactive iOS app for studying different digital circuits, which serves as the basis for the second: an augmented-reality quiz game.
This paper describes a realisation of a ResNet face recognition method over a Zigbee-based wireless protocol. The system uses a CC2530 Zigbee radio frequency chip with a VC0706 camera connected to it. An Arduino Nano is used to organise data compression and the efficient division of Zigbee packets. The proposed solution also simplifies data transmission within the strict bandwidth of the Zigbee protocol and provides reliable packet forwarding in case of frequency distortion. The investigated model uses a Raspberry Pi 3 with a connected Zigbee End Device (ZED) to receive the transmitted images and accelerate deep learning inference. The model is integrated into a smart security system based on Zigbee modules, a MySQL database, and an Android application, and works in the background using daemon processes. To protect data, all wireless connections are encrypted with the 128-bit Advanced Encryption Standard (AES-128) algorithm. Experimental results show the possibility of implementing complex systems under the restricted requirements of available transmission protocols.
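As an illustration of the packetization step, the sketch below encrypts an image with AES-128 and splits the ciphertext into Zigbee-sized chunks with sequence headers for reassembly. The payload size, header layout, and use of pycryptodome's EAX mode are assumptions for the example, not the paper's exact protocol.

```python
import struct

from Crypto.Cipher import AES  # pycryptodome, assumed for AES-128

PAYLOAD = 80  # assumed usable Zigbee payload size in bytes


def packetize(image_bytes, key):
    """Encrypt an image with AES-128 and split it into numbered packets."""
    cipher = AES.new(key, AES.MODE_EAX)  # 16-byte key -> AES-128
    ciphertext, tag = cipher.encrypt_and_digest(image_bytes)
    blob = cipher.nonce + tag + ciphertext
    packets = []
    total = (len(blob) + PAYLOAD - 1) // PAYLOAD
    for seq in range(total):
        chunk = blob[seq * PAYLOAD:(seq + 1) * PAYLOAD]
        # 4-byte header: sequence number and packet count, for reassembly
        packets.append(struct.pack(">HH", seq, total) + chunk)
    return packets
```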
Increased availability of mobile cameras has led to more opportunities for people to record videos of significantly more of their lives. Many times people want to share these videos, but only to certain people who were co-present. Since the videos may be of a large event where the attendees are not necessarily known, we need a method for proving co-presence without revealing information before co-presence is proven. In this demonstration, we present a privacy-preserving method for comparing the similarity of two videos without revealing the contents of either video. This technique leverages the Similarity of Simultaneous Observation technique for detecting hidden webcams and modifies the existing algorithms so that they are computationally feasible to run under fully homomorphic encryption scheme on modern mobile devices. The demonstration will consist of a variety of devices preloaded with our software. We will demonstrate the video sharing software performing comparisons in real time. We will also make the software available to Android devices via a QR code so that participants can record and exchange their own videos.
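The Similarity of Simultaneous Observation idea can be sketched in the clear as follows: each video is reduced to a per-frame brightness time series, and two series are compared by correlation. Function names are illustrative, and the homomorphic-encryption layer the demonstration actually runs under is omitted here.

```python
import numpy as np


def brightness_signature(frames):
    """Per-frame mean brightness: a compact time series describing a video."""
    return np.array([frame.mean() for frame in frames])


def similarity(sig_a, sig_b):
    """Pearson-style correlation of two brightness signatures.

    Two cameras observing the same scene at the same time produce highly
    correlated signatures; unrelated videos do not. The real system
    evaluates a comparison like this under fully homomorphic encryption.
    """
    a = (sig_a - sig_a.mean()) / (sig_a.std() + 1e-9)
    b = (sig_b - sig_b.mean()) / (sig_b.std() + 1e-9)
    n = min(len(a), len(b))
    return float(np.mean(a[:n] * b[:n]))
```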
In the past, air-gapped systems that are isolated from networks have been considered very secure. Yet there have been reports of such systems being breached. These breaches have been shown to use unconventional means of communication, also known as covert channels, such as acoustic, electromagnetic, magnetic, electric, optical, and thermal channels, to transfer data. In this paper, a review of various attack methods that can compromise an air-gapped system is presented, along with a summary of how efficient and dangerous a particular method can be. The capabilities of each covert channel are listed to better understand the threat it poses, and some countermeasures to safeguard against such attack methods are mentioned. These attack methods have already been proven to work, and awareness of such covert channels for data exfiltration is crucial in various industries.
Recently, smart video security systems have become an active research area. Existing video security systems mainly detect local abnormalities seen by a single camera. In this case, it is difficult to capture the characteristics of each local region and the situation across the entire watched area. In this paper, we develop an object map for the entire surveillance area using a combination of surveillance cameras, together with an algorithm that detects anomalies by learning normal situations. The surveillance camera in each area detects and tracks people and cars, creates a local object map, and transmits it to the server. The surveillance server combines the local maps to generate a global map of the entire area. Probability maps are automatically calculated from the global maps, and normal/abnormal decisions are made using data trained on normal situations. The system issues three report statuses: normal, caution, and warning; for the caution report, performance reaches 99.99% normal detection and 86.6% abnormal detection.
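A toy version of the map-fusion and probability-map step might look like the following; the grid size, thresholds, and method names are invented for illustration and do not reflect the paper's parameters.

```python
import numpy as np


class GlobalObjectMap:
    """Fuse local detections from many cameras into one probability map."""

    def __init__(self, shape=(100, 100)):
        self.counts = np.zeros(shape)
        self.frames = 0

    def add_local_map(self, detections):
        """detections: iterable of (row, col) grid cells from one camera."""
        for r, c in detections:
            self.counts[r, c] += 1

    def end_frame(self):
        self.frames += 1

    def probability_map(self):
        """Long-run probability of observing an object in each cell."""
        return self.counts / max(self.frames, 1)

    def assess(self, detections, caution=0.01, warning=0.001):
        """Label each detection normal / caution / warning by its rarity."""
        prob = self.probability_map()
        labels = []
        for r, c in detections:
            p = prob[r, c]
            labels.append("warning" if p < warning
                          else "caution" if p < caution else "normal")
        return labels
```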
A novel deep neural network is proposed for accurate and robust crowd counting. Crowd counting is a complex task, as it strongly depends on the deployed camera's characteristics and, above all, the scene perspective. Crowd counting is essential in security applications where Internet of Things (IoT) cameras are deployed to help with crowd management tasks. The complexity of a scene varies greatly, and a medium- to large-scale security system based on IoT cameras must cater for changes in perspective and in how people appear from different vantage points. To address this, our deep architecture extracts multi-scale features with a pyramid contextual module that provides long-range contextual information and enlarges the receptive field. Experiments were run on three major crowd counting datasets to test our proposed method. The results demonstrate that our method surpasses the performance of state-of-the-art methods.
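The pyramid contextual idea can be sketched in PyTorch as below: features are average-pooled at several bin sizes, projected, upsampled, and fused with the input to widen the receptive field. The bin sizes and channel counts are illustrative guesses, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PyramidContextModule(nn.Module):
    """Pool features at several scales and fuse them with the input,
    injecting long-range context (in the spirit of a pyramid contextual
    module; all sizes here are illustrative)."""

    def __init__(self, channels, bins=(1, 2, 4, 8)):
        super().__init__()
        branch_ch = channels // len(bins)
        self.branches = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(b),
                          nn.Conv2d(channels, branch_ch, 1))
            for b in bins
        )
        self.fuse = nn.Conv2d(channels + len(bins) * branch_ch, channels, 1)

    def forward(self, x):
        h, w = x.shape[2:]
        # Upsample each pooled branch back to the input resolution.
        context = [F.interpolate(branch(x), size=(h, w),
                                 mode="bilinear", align_corners=False)
                   for branch in self.branches]
        return self.fuse(torch.cat([x] + context, dim=1))
```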
Video surveillance plays a pivotal role in today's world. The technology has advanced greatly as artificial intelligence, machine learning, and deep learning have been incorporated into surveillance systems. Using these techniques, various systems are in place that help differentiate suspicious behaviours in live tracking of footage. Human behaviour is the most unpredictable, and determining whether it is suspicious or normal is very difficult. A deep learning approach is used to detect suspicious or normal activity in an academic environment and to send an alert message to the corresponding authority when suspicious activity is predicted. Monitoring is performed on consecutive frames extracted from the video. The framework is divided into two parts: in the first, features are computed from the video frames; in the second, a classifier uses the obtained features to predict the class as suspicious or normal.
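A minimal version of such a two-part pipeline, assuming a pretrained ResNet-18 as the per-frame feature extractor and an SVM as the classifier (both are stand-ins; the abstract does not specify the paper's choices):

```python
import torch
from sklearn.svm import SVC
from torchvision import models, transforms

# Part 1 (assumed): a pretrained CNN as a generic frame-feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classification head
backbone.eval()

prep = transforms.Compose([
    transforms.ToTensor(),          # H x W x 3 uint8 array -> float tensor
    transforms.Resize((224, 224)),
])


def frame_features(frames):
    """Compute one 512-d feature vector per extracted video frame."""
    with torch.no_grad():
        batch = torch.stack([prep(f) for f in frames])
        return backbone(batch).numpy()


# Part 2: a classifier labels each frame as suspicious (1) or normal (0).
classifier = SVC()
# classifier.fit(frame_features(train_frames), train_labels)
# predictions = classifier.predict(frame_features(test_frames))
```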
Robot Operating System (ROS) is becoming more and more important and is widely used by developers and researchers in various domains. One of the most important fields where it is being used is the self-driving car industry. However, this framework is far from being totally secure, and the existing security breaches lack robust solutions. In this paper we focus on camera vulnerabilities, as the camera is often the most important source for environment discovery and the decision-making process. We propose an unsupervised anomaly detection tool for detecting suspicious frames in incoming camera flows. Our solution is based on spatio-temporal autoencoders trained to faithfully reconstruct the camera frames and detect abnormal ones by measuring the difference from the input. We test our approach on a real-world dataset, i.e., flows coming from the embedded cameras of self-driving cars. Our solution outperforms existing works in different scenarios.
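The detection principle can be sketched as follows: a small spatio-temporal (3D convolutional) autoencoder is trained on normal clips, and a clip's reconstruction error serves as its anomaly score. The layer sizes are toy values, not the paper's architecture.

```python
import torch
import torch.nn as nn


class SpatioTemporalAE(nn.Module):
    """Toy convolutional autoencoder over short grayscale frame clips of
    shape (batch, 1, T, H, W); T, H, W should be divisible by 4 here."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, clip):
        return self.decoder(self.encoder(clip))


def anomaly_score(model, clip):
    """Reconstruction error: high when a clip deviates from training data."""
    with torch.no_grad():
        return torch.mean((model(clip) - clip) ** 2).item()

# A camera flow is flagged abnormal when anomaly_score exceeds a threshold
# calibrated on normal footage; the threshold choice is application-specific.
```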
By bringing a uniform development platform that seamlessly combines hardware components and software architectures from developers across the globe, and by reducing the complexity of producing robots that help people in their daily routines, ROS has turned out to be a game changer. It is disappointing to see the lack of penetration of this technology in verticals involving protection, defense, and security. By leveraging the power of ROS in the field of robotic automation and computer vision, this research paves the way for identifying suspicious activity with autonomously moving bots running on ROS. The paper proposes and validates a pipeline in which ROS and computer vision algorithms such as YOLO work in sync to provide smarter and more accurate methods for indoor and limited outdoor patrolling. Identification of age, gender, weapons, and other elements that can disturb public harmony is an integral part of the research and development process. Simulation and testing reflect the efficiency and speed of the designed software architecture.
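A minimal ROS node wiring a camera topic to a YOLO detector might look like the sketch below; the topic name, model files, and the use of OpenCV's DNN module as the YOLO runtime are assumptions for illustration.

```python
import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

# Assumed weight/config file names; any YOLO runtime could stand in here.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
bridge = CvBridge()


def on_frame(msg):
    """Run YOLO on each camera frame published by the patrolling robot."""
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True)
    net.setInput(blob)
    detections = net.forward(net.getUnconnectedOutLayersNames())
    # Downstream logic would filter detections for weapons, estimate
    # age/gender, and raise alerts; omitted in this sketch.


rospy.init_node("patrol_vision")
rospy.Subscriber("/camera/image_raw", Image, on_frame)
rospy.spin()
```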
The Robot Operating System (ROS) is being deployed for multiple life-critical activities such as self-driving cars, drones, and industrial systems. However, its security has been persistently neglected, especially for the image flows incoming from robot cameras. In this paper, we perform a structured security assessment of robot cameras using ROS. We point out a relevant number of security flaws that can be used to take over the flows incoming from the robot cameras. Furthermore, we propose an intrusion detection system to detect abnormal flows. Our defense approach is based on image comparison and an unsupervised anomaly detection method. We evaluate our approach on robot cameras embedded in a self-driving car.
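The image-comparison half of such a defense can be illustrated very simply: consecutive frames of a legitimate flow change gradually, so a frame injected into a hijacked flow shows an abrupt jump in pixel distance. The threshold below is an arbitrary placeholder, not a value from the paper.

```python
import numpy as np


def frame_distance(prev, curr):
    """Mean absolute pixel difference between two consecutive frames."""
    return float(np.mean(np.abs(prev.astype(np.int16) - curr.astype(np.int16))))


def detect_injection(frames, threshold=40.0):
    """Flag frame indices that differ abruptly from their predecessor.

    A crude stand-in for the paper's detector: an injected frame rarely
    matches the scene it interrupts.
    """
    alerts = []
    for i in range(1, len(frames)):
        if frame_distance(frames[i - 1], frames[i]) > threshold:
            alerts.append(i)
    return alerts
```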
In this paper, the design and complete implementation of a robot that can be autonomously controlled for surveillance are presented. The robot can be seamlessly integrated into an existing security system. Its inherent abilities allow it to map the interior of an unexplored building and steer autonomously using its self-navigation features. It uses a 2D LIDAR to map its environment in real time, while an HD camera records suspicious activity. It also features a built-in display with touch-based commands and voice recognition that enables people to interact with the robot in any situation.
Coherent rendering in augmented reality deals with synthesizing virtual content that seamlessly blends in with the real content. Unfortunately, capturing or modeling every real aspect in the virtual rendering process is often unfeasible or too expensive. We present a post-processing method that improves the look of rendered overlays in a dental virtual try-on application. We combine the original frame and the default rendered frame in an autoencoder neural network in order to obtain a more natural output, inspired by artistic style transfer research. Specifically, we apply the original frame as style on the rendered frame as content, repeating the process with each new pair of frames. Our method requires only a single forward pass, our shallow architecture ensures fast execution, and our internal feedback loop inherently enforces temporal consistency.
In this study, we propose a novel method for drone surveillance that can simultaneously analyze time-frequency responses in all pixels of a high-frame-rate video. The propellers of flying drones rotate at hundreds of Hz, and their principal vibration frequency components are much higher than those of background objects. To separate the pixels around a drone's propellers from the background, we utilize these time-series features for vibration source localization with a pixel-level short-time Fourier transform (STFT). We verify the relationship between the number of taps in the STFT computation and the performance of our algorithm, including execution time and localization accuracy, by conducting experiments under various conditions, such as degraded appearance, weather, and defocus blur. The robustness of the proposed algorithm is also verified by localizing a flying multi-copter in real time in an outdoor scenario.
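A pixel-level STFT can be sketched with SciPy as below: every pixel's intensity over time is transformed, and pixels whose energy concentrates above a cutoff frequency (hundreds of Hz for propellers) are flagged. Function and parameter names are illustrative, not the paper's implementation.

```python
import numpy as np
from scipy.signal import stft


def propeller_map(frames, fs, n_taps=64, min_hz=200.0):
    """Ratio of high- to low-frequency energy per pixel.

    frames: (T, H, W) array from a high-frame-rate camera sampled at fs Hz.
    n_taps corresponds to the STFT window length studied in the paper; the
    ratio is large near propellers, whose vibration exceeds min_hz.
    """
    t, h, w = frames.shape
    signals = frames.reshape(t, h * w).astype(np.float64)
    freqs, _, spec = stft(signals, fs=fs, nperseg=n_taps, axis=0)
    power = np.abs(spec).mean(axis=-1)           # average over time windows
    high = power[freqs >= min_hz].sum(axis=0)    # high-frequency energy
    low = power[freqs < min_hz].sum(axis=0) + 1e-9
    return (high / low).reshape(h, w)
```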
To meet the high requirements of human-machine interaction, quadruped robots with human recognition and tracking capability are studied in this paper. We first introduce a marker recognition system that uses a multi-thread laser scanner and retro-reflective markers to distinguish the robot's leader from other objects. When the robot follows the leader autonomously, a variant of the A* algorithm in which obstacle grids are virtually extended (EA*) is used to plan the path. If the robot instead needs to track and follow the leader's path as closely as possible, it trusts that the path the leader has traveled is safe enough and uses the incremental form of the EA* algorithm (IEA*) to reproduce the trajectory. Simulation and experimental results illustrate the feasibility and effectiveness of the proposed algorithms.
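The EA* idea, A* planning over a grid whose obstacles are virtually extended, can be sketched as follows; the inflation radius and grid encoding are illustrative, and the incremental IEA* variant is not reproduced.

```python
import heapq


def ea_star(grid, start, goal, inflate=1):
    """A* on a grid whose obstacles are virtually extended by `inflate`
    cells, keeping the robot's body clear of walls (a sketch of the EA*
    idea; grid cells are truthy when occupied)."""
    rows, cols = len(grid), len(grid[0])

    def blocked(r, c):
        # A cell is blocked if any obstacle lies within the inflation radius.
        for dr in range(-inflate, inflate + 1):
            for dc in range(-inflate, inflate + 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and grid[rr][cc]:
                    return True
        return False

    def h(cell):  # Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, None)]
    came_from, cost = {}, {start: 0}
    while frontier:
        _, g, cell, parent = heapq.heappop(frontier)
        if cell in came_from:
            continue
        came_from[cell] = parent
        if cell == goal:
            path = []
            while cell:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not blocked(nr, nc):
                if g + 1 < cost.get((nr, nc), float("inf")):
                    cost[(nr, nc)] = g + 1
                    heapq.heappush(frontier, (g + 1 + h((nr, nc)), g + 1,
                                              (nr, nc), cell))
    return None  # no path found
```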
Digital microfluidic biochips (DMFBs) have recently become popular in the healthcare industry because of their low cost, high throughput, and portability. Users can execute experiments on biochips with high resolution, and the biochip market has therefore grown significantly. However, malicious attackers exploit Intellectual Property (IP) piracy and Trojan attacks to gain illegal profits. Conventional approaches present defense mechanisms that target either IP piracy or Trojan attacks. In practice, DMFBs may be threatened by both attacks at the same time. This paper presents a comprehensive security system to protect DMFBs from IP piracy and Trojan attacks. We propose an authentication mechanism that protects IP and detects errors caused by Trojans with CCD cameras. With our security system, we can generate secret keys for authentication and determine whether a bioassay is under IP piracy or Trojan attack. Experimental results demonstrate the efficacy of our security system without overhead in bioassay completion time.
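As a loose illustration of key-based authentication combined with CCD-based checking, the sketch below tags an electrode actuation sequence with an HMAC and compares droplet checkpoints observed in CCD images against golden ones; the data formats and names are invented, not the paper's protocol.

```python
import hashlib
import hmac


def actuation_tag(secret_key, actuation_sequence):
    """Authentication tag over a bioassay's electrode actuation sequence.

    secret_key: bytes; actuation_sequence: list of step strings. Only the
    key holder can produce a valid tag, countering IP piracy.
    """
    message = ",".join(actuation_sequence).encode()
    return hmac.new(secret_key, message, hashlib.sha256).hexdigest()


def verify_checkpoints(expected, observed, tolerance=0):
    """Compare droplet positions read from CCD images with golden ones.

    expected/observed: lists of (step, x, y) checkpoints. A mismatch
    beyond `tolerance` suggests a Trojan altering the bioassay's execution.
    """
    for (s1, x1, y1), (s2, x2, y2) in zip(expected, observed):
        if s1 != s2 or abs(x1 - x2) > tolerance or abs(y1 - y2) > tolerance:
            return False
    return len(expected) == len(observed)
```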