Biblio

Found 124 results

Filters: Keyword is Cameras
2021-02-23
Wöhnert, S.-J., Wöhnert, K. H., Almamedov, E., Skwarek, V..  2020.  Trusted Video Streams in Camera Sensor Networks. 2020 IEEE 18th International Conference on Embedded and Ubiquitous Computing (EUC). :17—24.

Proof of integrity in video data produced by surveillance cameras requires active forensic methods such as signatures; otherwise authenticity and integrity can be compromised and the data becomes unusable, e.g. for legal evidence. But a simple file or stream signature loses its validity when the stream is cut into parts or when data and signature are separated. Using the principles of security in distributed systems similar to those of blockchain and distributed ledger technologies (BC/DLT), a chain is presented that consists of the frames of a video, whose frame hash values are distributed among a camera sensor network. The backbone of this Framechain within the camera sensor network is a camera identity concept that ensures accountability, integrity and authenticity according to the extended CIA triad security concept. Modularity by secure sequences, autarky in proof and robustness against natural modulation of the data are the key parameters of this new approach. It allows the standalone data, and even parts of it, to be used as hard evidence.
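
A minimal sketch of the frame-chaining idea described above (not the authors' Framechain implementation; the camera-identity binding and the signing/distribution steps are simplified assumptions):

```python
import hashlib

def chain_frames(frames, camera_id):
    """Link each video frame's hash to its predecessor (illustrative sketch)."""
    links = []
    prev_digest = hashlib.sha256(camera_id.encode()).digest()  # genesis link bound to a camera identity
    for index, frame_bytes in enumerate(frames):
        h = hashlib.sha256()
        h.update(prev_digest)   # previous link: tampering with one frame breaks all later links
        h.update(frame_bytes)   # raw frame payload
        prev_digest = h.digest()
        links.append({"index": index, "digest": prev_digest.hex()})
    return links

def verify(frames, camera_id, links):
    """Recompute the chain and compare against the stored digests."""
    return [l["digest"] for l in chain_frames(frames, camera_id)] == [l["digest"] for l in links]

# toy usage with two fake "frames"
chain = chain_frames([b"frame-0", b"frame-1"], camera_id="cam-17")
assert verify([b"frame-0", b"frame-1"], "cam-17", chain)
```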

2021-02-03
Illing, B., Westhoven, M., Gaspers, B., Smets, N., Brüggemann, B., Mathew, T..  2020.  Evaluation of Immersive Teleoperation Systems using Standardized Tasks and Measurements. 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :278—285.

Despite advances in autonomous functionality for robots, teleoperation remains a means for performing delicate tasks in safety-critical contexts like explosive ordnance disposal (EOD) and in ambiguous environments. Immersive stereoscopic displays have been proposed and developed in this regard, but bring about their own specific problems, e.g., simulator sickness. This work builds upon standardized test environments to yield reproducible comparisons between different robotic platforms. The focus was placed on testing three optronic systems of differing degrees of immersion: (1) a laptop display showing multiple monoscopic camera views, (2) an off-the-shelf virtual reality headset coupled with a pan-tilt-based stereoscopic camera, and (3) a so-called Telepresence Unit, providing fast pan, tilt, yaw rotation, stereoscopic view, and spatial audio. Stereoscopic systems yielded significantly faster task completion only for the maneuvering task. As expected, they also induced simulator sickness, among other effects. However, the amount of simulator sickness varied between the two stereoscopic systems. The collected data suggest that a higher degree of immersion combined with careful system design can reduce the expected increase in simulator sickness compared to the monoscopic camera baseline, while making the interface subjectively more effective for certain tasks.

Martin, S., Parra, G., Cubillo, J., Quintana, B., Gil, R., Perez, C., Castro, M..  2020.  Design of an Augmented Reality System for Immersive Learning of Digital Electronic. 2020 XIV Technologies Applied to Electronics Teaching Conference (TAEE). :1—6.

This article describes the development of two mobile applications for learning Digital Electronics. The first is an interactive iOS app in which the different digital circuits can be studied, and it serves as the basis for the second: a question-based game in augmented reality.

2021-01-28
Romashchenko, V., Brutscheck, M., Chmielewski, I..  2020.  Organisation and Implementation of ResNet Face Recognition Architectures in the Environment of Zigbee-based Data Transmission Protocol. 2020 Fourth International Conference on Multimedia Computing, Networking and Applications (MCNA). :25—30.

This paper describes a realisation of a ResNet face recognition method over a Zigbee-based wireless protocol. The system uses a CC2530 Zigbee-based radio frequency chip with a VC0706 camera connected to it. An Arduino Nano is used to organise data compression and the efficient division of Zigbee packets. The proposed solution also simplifies data transmission within the strict bandwidth of the Zigbee protocol and provides reliable packet forwarding in case of frequency distortion. The investigation model uses a Raspberry Pi 3 with a connected Zigbee End Device (ZED) for receiving the images and accelerating the deep learning inference. The model is integrated into a smart security system based on Zigbee modules, a MySQL database and an Android application, and works in the background using daemon processes. To protect data, all wireless connections are encrypted with the 128-bit Advanced Encryption Standard (AES-128) algorithm. Experimental results show that it is possible to implement complex systems under the restricted requirements of the available transmission protocols.
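
As a rough illustration of the transmission side, the sketch below encrypts a compressed image with AES-128 and splits the ciphertext into small packets. The payload size, cipher mode (EAX via PyCryptodome) and key handling are assumptions, not the paper's CC2530/Arduino implementation:

```python
from Crypto.Cipher import AES          # pip install pycryptodome
from Crypto.Random import get_random_bytes

ZIGBEE_PAYLOAD = 64                    # assumed usable payload per packet; real limits depend on the stack
KEY = get_random_bytes(16)             # AES-128 key shared by sender and receiver

def encrypt_image(jpeg_bytes: bytes):
    """Encrypt a compressed image and split it into Zigbee-sized packets."""
    cipher = AES.new(KEY, AES.MODE_EAX)
    ciphertext, tag = cipher.encrypt_and_digest(jpeg_bytes)
    chunks = [ciphertext[i:i + ZIGBEE_PAYLOAD] for i in range(0, len(ciphertext), ZIGBEE_PAYLOAD)]
    # nonce and tag travel once per image; each chunk carries a sequence number for reassembly
    return cipher.nonce, tag, list(enumerate(chunks))

def decrypt_image(nonce, tag, packets):
    """Reassemble packets in sequence order and verify integrity while decrypting."""
    ciphertext = b"".join(chunk for _, chunk in sorted(packets))
    return AES.new(KEY, AES.MODE_EAX, nonce=nonce).decrypt_and_verify(ciphertext, tag)

# toy usage
nonce, tag, packets = encrypt_image(b"\xff\xd8" + b"fake jpeg payload" * 20)
assert decrypt_image(nonce, tag, packets).startswith(b"\xff\xd8")
```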

Goswami, U., Wang, K., Nguyen, G., Lagesse, B..  2020.  Privacy-Preserving Mobile Video Sharing using Fully Homomorphic Encryption. 2020 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops). :1—3.

Increased availability of mobile cameras has led to more opportunities for people to record videos of significantly more of their lives. Often people want to share these videos, but only with certain people who were co-present. Since the videos may be of a large event where the attendees are not necessarily known, we need a method for proving co-presence without revealing information before co-presence is proven. In this demonstration, we present a privacy-preserving method for comparing the similarity of two videos without revealing the contents of either video. This technique leverages the Similarity of Simultaneous Observation technique for detecting hidden webcams and modifies the existing algorithms so that they are computationally feasible to run under a fully homomorphic encryption scheme on modern mobile devices. The demonstration will consist of a variety of devices preloaded with our software. We will demonstrate the video sharing software performing comparisons in real time. We will also make the software available to Android devices via a QR code so that participants can record and exchange their own videos.
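
For context, the Similarity of Simultaneous Observation idea can be sketched in the clear as a correlation of per-frame brightness time series; the paper's contribution, running such a comparison under fully homomorphic encryption on mobile devices, is not reproduced here:

```python
import numpy as np

def brightness_series(frames):
    """Reduce each frame to its mean brightness, giving a compact time series."""
    return np.array([float(f.mean()) for f in frames])

def sso_similarity(frames_a, frames_b):
    """Correlation of simultaneous brightness changes between two recordings.

    High correlation suggests the two videos observed the same scene at the same
    time.  In the demonstrated system this comparison would run on encrypted
    vectors under FHE; here it is shown in plaintext for brevity.
    """
    a, b = brightness_series(frames_a), brightness_series(frames_b)
    n = min(len(a), len(b))
    a, b = a[:n] - a[:n].mean(), b[:n] - b[:n].mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# toy usage: two synthetic "videos" driven by the same global lighting changes
rng = np.random.default_rng(0)
light = rng.random(30)
vid_a = [np.full((8, 8), v) + rng.normal(0, 0.01, (8, 8)) for v in light]
vid_b = [np.full((8, 8), v) + rng.normal(0, 0.01, (8, 8)) for v in light]
print(sso_similarity(vid_a, vid_b))   # close to 1.0 for co-present recordings
```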

2021-01-25
Naz, M. T., Zeki, A. M..  2020.  A Review of Various Attack Methods on Air-Gapped Systems. 2020 International Conference on Innovation and Intelligence for Informatics, Computing and Technologies (3ICT). :1—6.

In the past, air-gapped systems that are isolated from networks have been considered very secure, yet there have been reports of such systems being breached. These breaches have been shown to use unconventional means of communication, also known as covert channels, such as acoustic, electromagnetic, magnetic, electric, optical, and thermal channels, to transfer data. In this paper, a review of various attack methods that can compromise an air-gapped system is presented, along with a summary of how efficient and dangerous a particular method could be. The capabilities of each covert channel are listed to better understand the threat it poses, and some countermeasures to safeguard against such attack methods are mentioned. These attack methods have already been proven to work, and awareness of such covert channels for data exfiltration is crucial in various industries.

2021-01-15
McCloskey, S., Albright, M..  2019.  Detecting GAN-Generated Imagery Using Saturation Cues. 2019 IEEE International Conference on Image Processing (ICIP). :4584—4588.
Image forensics is an increasingly relevant problem, as it can potentially address online disinformation campaigns and mitigate problematic aspects of social media. Of particular interest, given its recent successes, is the detection of imagery produced by Generative Adversarial Networks (GANs), e.g., "deepfakes". Leveraging large training sets and extensive computing resources, recent GANs can be trained to generate synthetic imagery which is (in some ways) indistinguishable from real imagery. We analyze the structure of the generating network of a popular GAN implementation [1], and show that the network's treatment of exposure is markedly different from a real camera. We further show that this cue can be used to distinguish GAN-generated imagery from camera imagery, including effective discrimination between GAN imagery and real camera images used to train the GAN.
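
A toy illustration of an exposure/saturation cue (not the authors' exact feature set): measure the fraction of clipped and crushed pixels, which tends to differ between camera images and GAN output, and feed those numbers to a classifier:

```python
import numpy as np

def saturation_features(image_u8: np.ndarray, low=2, high=253):
    """Fraction of under- and over-exposed pixels per channel in an 8-bit image."""
    feats = []
    for c in range(image_u8.shape[2]):
        channel = image_u8[..., c]
        feats.append(float((channel >= high).mean()))   # clipped highlights
        feats.append(float((channel <= low).mean()))    # crushed shadows
    return np.array(feats)

# toy usage on a random 8-bit RGB image; in practice these six numbers would be
# fed to a small classifier trained on labelled camera and GAN images
img = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
print(saturation_features(img))
```
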
Yang, X., Li, Y., Lyu, S..  2019.  Exposing Deep Fakes Using Inconsistent Head Poses. ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :8261—8265.
In this paper, we propose a new method to expose AI-generated fake face images or videos (commonly known as Deep Fakes). Our method is based on the observation that Deep Fakes are created by splicing a synthesized face region into the original image and, in doing so, introduce errors that can be revealed when 3D head poses are estimated from the face images. We perform experiments to demonstrate this phenomenon and further develop a classification method based on this cue. Using features based on this cue, an SVM classifier is evaluated on a set of real face images and Deep Fakes.
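
A minimal sketch of the classification step, assuming the head-pose-inconsistency features (e.g., from landmark-based pose estimation) are computed upstream; the feature values below are synthetic stand-ins:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# X: one row per face, e.g. differences between the rotation estimated from the
# whole face and from the central landmarks (assumed precomputed upstream);
# y: 1 = Deep Fake, 0 = real.  Random numbers stand in for real features here.
rng = np.random.default_rng(1)
X_real = rng.normal(0.0, 0.05, (200, 3))      # real faces: the two pose estimates agree
X_fake = rng.normal(0.3, 0.15, (200, 3))      # spliced faces: the estimates disagree
X = np.vstack([X_real, X_fake])
y = np.array([0] * 200 + [1] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```
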
2021-01-11
Shin, H. C., Chang, J., Na, K..  2020.  Anomaly Detection Algorithm Based on Global Object Map for Video Surveillance System. 2020 20th International Conference on Control, Automation and Systems (ICCAS). :793—795.

Recently, smart video security systems have been actively developed. Existing video security systems mainly detect local abnormalities seen by a single camera, which makes it difficult to capture the characteristics of each local region and the situation across the entire watched area. In this paper, we developed an object map for the entire surveillance area using a combination of surveillance cameras, and an algorithm to detect anomalies by learning normal situations. The surveillance camera in each area detects and tracks people and cars, creates a local object map and transmits it to the server. The surveillance server combines the local maps to generate a global map of the entire area. Probability maps are automatically calculated from the global maps, and normal/abnormal decisions are made using data trained on normal situations. Three reporting statuses are used: normal, caution, and warning; the caution-report performance shows a normal-detection rate of 99.99% and an abnormal-detection rate of 86.6%.
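
An illustrative sketch of the probability-map idea (not the authors' system): accumulate detections from normal operation into a global grid, then grade new detections as normal, caution, or warning; the grid size and thresholds are assumptions:

```python
import numpy as np

GRID = (50, 50)                      # discretised global map of the whole watched area

def build_probability_map(normal_tracks):
    """Accumulate (x, y) detections from all cameras during normal operation."""
    counts = np.zeros(GRID)
    for x, y in normal_tracks:
        counts[int(y), int(x)] += 1
    return counts / counts.sum()

def report_level(prob_map, detection, caution=1e-4, warning=1e-6):
    """Map a new detection to normal / caution / warning by its learned probability."""
    p = prob_map[int(detection[1]), int(detection[0])]
    if p < warning:
        return "warning"
    return "caution" if p < caution else "normal"

# toy usage: people normally walk along a corridor at y = 10
normal = [(x % 50, 10) for x in range(5000)]
pmap = build_probability_map(normal)
print(report_level(pmap, (25, 10)))   # normal
print(report_level(pmap, (25, 40)))   # warning: a never-seen region
```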

Khadka, A., Argyriou, V., Remagnino, P..  2020.  Accurate Deep Net Crowd Counting for Smart IoT Video acquisition devices. 2020 16th International Conference on Distributed Computing in Sensor Systems (DCOSS). :260—264.

A novel deep neural network is proposed for accurate and robust crowd counting. Crowd counting is a complex task, as it strongly depends on the deployed camera characteristics and, above all, the scene perspective. Crowd counting is essential in security applications where Internet of Things (IoT) cameras are deployed to help with crowd management tasks. The complexity of a scene varies greatly, and a medium- to large-scale security system based on IoT cameras must cater for changes in perspective and in how people appear from different vantage points. To address this, our deep architecture extracts multi-scale features with a pyramid contextual module to provide long-range contextual information and enlarge the receptive field. Experiments were run on three major crowd counting datasets to test our proposed method. Results demonstrate that our method surpasses the performance of state-of-the-art methods.

Amrutha, C. V., Jyotsna, C., Amudha, J..  2020.  Deep Learning Approach for Suspicious Activity Detection from Surveillance Video. 2020 2nd International Conference on Innovative Mechanisms for Industry Applications (ICIMIA). :335—339.

Video surveillance plays a pivotal role in today's world. The technology has advanced considerably with the introduction of artificial intelligence, machine learning and deep learning. Using combinations of these, various systems are in place that help differentiate suspicious behaviors in live video footage. Human behaviour is the most unpredictable, and it is very difficult to determine whether it is suspicious or normal. A deep learning approach is used to detect suspicious or normal activity in an academic environment and to send an alert message to the corresponding authority when a suspicious activity is predicted. Monitoring is often performed on consecutive frames extracted from the video. The framework is divided into two parts: in the first part, features are computed from video frames, and in the second part, a classifier predicts the class as suspicious or normal based on the obtained features.

Bhat, P., Batakurki, M., Chari, M..  2020.  Classifier with Deep Deviation Detection in PoE-IoT Devices. 2020 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT). :1–3.
With the rapid growth in the diversity of PoE-IoT devices and the concept of "edge intelligence", PoE-IoT security and behavior analysis is a major concern. These PoE-IoT devices lack visibility when the entire network infrastructure is taken into account. IoT devices are prone to design faults in their security capabilities. The entire network may be put at risk by attacks on vulnerable IoT devices, or malware might be introduced into IoT devices even by routine operations such as a firmware upgrade. There have been various approaches based on machine learning (ML) to classify PoE-IoT devices based on network traffic characteristics such as Deep Packet Inspection (DPI). In this paper, we propose a novel method for PoE-IoT classification in which the Decision Tree ML algorithm is used. In addition to classification, this method provides useful insights into the network deployment, based on the deviations detected. These insights can further be used for shaping policies, troubleshooting and behavior analysis of PoE-IoT devices.
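
A minimal sketch of the classification step using scikit-learn's decision tree; the traffic features, device classes and deviation handling below are illustrative assumptions, not the paper's feature set:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Assumed per-device features, e.g. [mean packet size, packets/s, distinct ports];
# labels name the PoE-IoT device class.  Synthetic numbers stand in for real traffic.
X = np.array([
    [220,  5, 2], [230,  6, 2], [210,  4, 2],     # IP cameras
    [ 90, 50, 8], [ 95, 55, 9], [ 85, 48, 7],     # VoIP phones
    [ 60,  1, 1], [ 65,  1, 1], [ 55,  2, 1],     # badge readers
])
y = ["camera"] * 3 + ["phone"] * 3 + ["badge"] * 3

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(tree.predict([[215, 5, 2]]))                # -> ['camera']

# Deviation detection (illustrative): if a known device drifts far from its class
# statistics, or the tree's predicted class changes, flag it for policy review.
```
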
2020-12-17
Amrouche, F., Lagraa, S., Frank, R., State, R..  2020.  Intrusion detection on robot cameras using spatio-temporal autoencoders: A self-driving car application. 2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring). :1—5.

The Robot Operating System (ROS) is becoming more and more important and is widely used by developers and researchers in various domains. One of the most important fields where it is being used is the self-driving car industry. However, this framework is far from being totally secure, and the existing security breaches do not have robust solutions. In this paper we focus on camera vulnerabilities, as the camera is often the most important source for environment discovery and the decision-making process. We propose an unsupervised anomaly detection tool for detecting suspicious frames incoming from camera flows. Our solution is based on spatio-temporal autoencoders used to faithfully reconstruct the camera frames and detect abnormal ones by measuring the difference with the input. We test our approach on a real-world dataset, i.e., flows coming from the embedded cameras of self-driving cars. Our solution outperforms existing works in different scenarios.
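
A rough sketch of the reconstruction-error approach, assuming short clips of stacked grayscale frames; the architecture, clip length and thresholding are illustrative, not the paper's model:

```python
import torch
import torch.nn as nn

class ClipAutoencoder(nn.Module):
    """Tiny autoencoder over a stack of T grayscale frames (T treated as channels)."""
    def __init__(self, t_frames=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(t_frames, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 8, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, t_frames, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_score(model, clip):
    """Per-clip reconstruction error; large values indicate suspicious frames."""
    with torch.no_grad():
        return torch.mean((model(clip) - clip) ** 2).item()

# toy usage: train on normal clips, then threshold the score on incoming ones
model = ClipAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
normal = torch.rand(8, 4, 64, 64)             # batch of 8 clips, 4 frames of 64x64 each
for _ in range(5):                            # a few illustrative training steps
    opt.zero_grad()
    loss = torch.mean((model(normal) - normal) ** 2)
    loss.backward()
    opt.step()
print(anomaly_score(model, torch.rand(1, 4, 64, 64)))
```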

Maram, S. S., Vishnoi, T., Pandey, S..  2019.  Neural Network and ROS based Threat Detection and Patrolling Assistance. 2019 Second International Conference on Advanced Computational and Communication Paradigms (ICACCP). :1—5.

ROS has emerged as a game changer for providing a uniform development platform that seamlessly combines hardware components and software architectures from developers across the globe, and for reducing the complexity of producing robots that help people in their daily lives. It is disappointing to see the lack of penetration of this technology in verticals that involve protection, defense and security. By leveraging the power of ROS in the field of robotic automation and computer vision, this research paves the way for identification of suspicious activity with autonomously moving bots that run on ROS. The paper proposes and validates a flow in which ROS and computer vision algorithms like YOLO work in sync to provide smarter and more accurate methods for indoor and limited outdoor patrolling. Identification of age, gender, weapons and other elements that can disturb public harmony is an integral part of the research and development process. Simulation and testing reflect the efficiency and speed of the designed software architecture.
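
A minimal sketch of how such a patrolling node could be wired up in ROS (Python/rospy); the topic names are assumptions and the YOLO inference is left as a placeholder since the paper's model and classes are not specified:

```python
#!/usr/bin/env python
# Minimal patrolling-assistance node: camera frames in, alert messages out.
import rospy
from sensor_msgs.msg import Image
from std_msgs.msg import String
from cv_bridge import CvBridge

bridge = CvBridge()
alert_pub = None

def detect_threats(frame):
    """Placeholder for the YOLO forward pass; return a list of suspicious labels.

    A real node would load YOLO weights (e.g. via OpenCV's DNN module or darknet
    bindings) and filter classes such as weapons here.
    """
    return []

def on_frame(msg):
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    for label in detect_threats(frame):
        alert_pub.publish(String(data="suspicious object: %s" % label))

if __name__ == "__main__":
    rospy.init_node("patrol_threat_detector")
    alert_pub = rospy.Publisher("/patrol/alerts", String, queue_size=10)
    rospy.Subscriber("/camera/image_raw", Image, on_frame)   # assumed topic name
    rospy.spin()
```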

Lagraa, S., Cailac, M., Rivera, S., Beck, F., State, R..  2019.  Real-Time Attack Detection on Robot Cameras: A Self-Driving Car Application. 2019 Third IEEE International Conference on Robotic Computing (IRC). :102—109.

The Robot Operating System (ROS) is being deployed for multiple life-critical activities such as self-driving cars, drones, and industrial systems. However, security has been persistently neglected, especially for the image flows incoming from camera robots. In this paper, we perform a structured security assessment of robot cameras using ROS. We point out a relevant number of security flaws that can be used to take over the flows incoming from the robot cameras. Furthermore, we propose an intrusion detection system to detect abnormal flows. Our defense approach is based on image comparisons and an unsupervised anomaly detection method. We evaluate our approach on robot cameras embedded in a self-driving car.

2020-12-15
Shanavas, H., Ahmed, S. A., Hussain, M. H. Safwat.  2018.  Design of an Autonomous Surveillance Robot Using Simultaneous Localization and Mapping. 2018 International Conference on Design Innovations for 3Cs Compute Communicate Control (ICDI3C). :64—68.

This paper presents the design as well as the complete implementation of a robot that can be autonomously controlled for surveillance and can be seamlessly integrated into an existing security system. The robot's inherent ability allows it to map the interiors of an unexplored building and steer autonomously using its self-ruling pilot feature. It uses a 2D LIDAR to map its environment in real time, while an HD camera records suspicious activity. It also features an in-built display with touch-based commands and voice recognition that enables people to interact with the robot in any situation.

2020-12-11
Vasiliu, V., Sörös, G..  2019.  Coherent Rendering of Virtual Smile Previews with Fast Neural Style Transfer. 2019 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). :66—73.

Coherent rendering in augmented reality deals with synthesizing virtual content that seamlessly blends in with the real content. Unfortunately, capturing or modeling every real aspect in the virtual rendering process is often unfeasible or too expensive. We present a post-processing method that improves the look of rendered overlays in a dental virtual try-on application. We combine the original frame and the default rendered frame in an autoencoder neural network in order to obtain a more natural output, inspired by artistic style transfer research. Specifically, we apply the original frame as style on the rendered frame as content, repeating the process with each new pair of frames. Our method requires only a single forward pass, our shallow architecture ensures fast execution, and our internal feedback loop inherently enforces temporal consistency.

Fujiwara, N., Shimasaki, K., Jiang, M., Takaki, T., Ishii, I..  2019.  A Real-time Drone Surveillance System Using Pixel-level Short-time Fourier Transform. 2019 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR). :303—308.

In this study we propose a novel method for drone surveillance that can simultaneously analyze time-frequency responses in all pixels of a high-frame-rate video. The propellers of flying drones rotate at hundreds of Hz and their principal vibration frequency components are much higher than those of their background objects. To separate the pixels around a drone's propellers from its background, we utilize these time-series features for vibration source localization with pixel-level short-time Fourier transform (STFT). We verify the relationship between the number of taps in the STFT computation and the performance of our algorithm, including the execution time and the localization accuracy, by conducting experiments under various conditions, such as degraded appearance, weather, and defocused blur. The robustness of the proposed algorithm is also verified by localizing a flying multi-copter in real-time in an outdoor scenario.
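
A simplified sketch of the pixel-level STFT idea using SciPy: treat each pixel as a time series over a high-frame-rate clip and flag pixels whose dominant frequency falls in a propeller-like band; the frame rate, tap count, band and dominance threshold are assumptions:

```python
import numpy as np
from scipy.signal import stft

def propeller_mask(frames, fps=1000, nperseg=64, band=(150.0, 450.0), dominance=5.0):
    """Mark pixels whose brightness oscillates at propeller-like frequencies.

    frames: array of shape (T, H, W), grayscale, from a high-frame-rate camera.
    Frame rate, tap count (nperseg) and frequency band are illustrative assumptions.
    """
    t, h, w = frames.shape
    series = frames.reshape(t, h * w).T.astype(np.float32)          # one time series per pixel
    freqs, _, Z = stft(series, fs=fps, nperseg=nperseg,
                       detrend="constant", axis=-1)                 # (pixels, freqs, segments)
    power = np.abs(Z).mean(axis=-1)                                 # average spectrum per pixel
    peak = power.argmax(axis=1)
    strong = power.max(axis=1) > dominance * (power.mean(axis=1) + 1e-9)
    in_band = (freqs[peak] >= band[0]) & (freqs[peak] <= band[1])
    return (strong & in_band).reshape(h, w)                         # True where a rotor likely is

# toy usage: a 300 Hz flicker in one corner of a synthetic 1000 fps clip
t = np.arange(256) / 1000.0
clip = np.random.rand(256, 32, 32) * 0.1
clip[:, :8, :8] += 0.5 + 0.5 * np.sin(2 * np.pi * 300 * t)[:, None, None]
print(propeller_mask(clip).sum(), "candidate propeller pixels")     # roughly the 64 flicker pixels
```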

2020-12-01
Zhang, H., Liu, H., Deng, L., Wang, P., Rong, X., Li, Y., Li, B., Wang, H..  2018.  Leader Recognition and Tracking for Quadruped Robots. 2018 IEEE International Conference on Information and Automation (ICIA). :1438—1443.

To meet the high requirements of human-machine interaction, quadruped robots with human recognition and tracking capability are studied in this paper. We first introduce a marker recognition system that uses a multi-thread laser scanner and retro-reflective markers to distinguish the robot's leader from other objects. When the robot follows the leader autonomously, a variant of the A* algorithm in which obstacle grids are extended virtually (EA*) is used to plan the path. If the robot instead needs to track and follow the leader's path as closely as possible, it trusts that the path the leader has traveled is safe enough and uses the incremental form of the EA* algorithm (IEA*) to reproduce the trajectory. Simulation and experiment results illustrate the feasibility and effectiveness of the proposed algorithms.
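
A toy sketch of the "extend obstacles virtually" idea behind EA*: inflate occupied grid cells by a safety margin and then run a plain grid A*; this is not the authors' EA*/IEA* implementation:

```python
import heapq
import numpy as np

def inflate(grid, margin=1):
    """Virtually extend obstacle cells by `margin` cells in every direction."""
    h, w = grid.shape
    out = grid.copy()
    for y, x in zip(*np.nonzero(grid)):
        out[max(0, y - margin):y + margin + 1, max(0, x - margin):x + margin + 1] = 1
    return out

def astar(grid, start, goal):
    """Plain 4-connected grid A* with Manhattan heuristic on an inflated map."""
    h, w = grid.shape
    g_cost, parent = {start: 0}, {start: None}
    open_set = [(abs(start[0] - goal[0]) + abs(start[1] - goal[1]), start)]
    while open_set:
        _, node = heapq.heappop(open_set)
        if node == goal:
            path = [node]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        y, x = node
        for nxt in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            ny, nx = nxt
            if 0 <= ny < h and 0 <= nx < w and not grid[ny, nx] \
                    and g_cost[node] + 1 < g_cost.get(nxt, 1 << 30):
                g_cost[nxt] = g_cost[node] + 1
                parent[nxt] = node
                heapq.heappush(open_set, (g_cost[nxt] + abs(ny - goal[0]) + abs(nx - goal[1]), nxt))
    return None

# toy usage: plan around a wall whose cells are virtually inflated first
grid = np.zeros((10, 10), dtype=int)
grid[4, 2:8] = 1
print(astar(inflate(grid, margin=1), (0, 5), (9, 5)))
```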

2020-11-02
Lin, Chun-Yu, Huang, Juinn-Dar, Yao, Hailong, Ho, Tsung-Yi.  2018.  A Comprehensive Security System for Digital Microfluidic Biochips. 2018 IEEE International Test Conference in Asia (ITC-Asia). :151—156.

Digital microfluidic biochips (DMFBs) have recently become popular in the healthcare industry because of their low cost, high throughput, and portability. Users can execute experiments on biochips with high resolution, and the biochip market is therefore growing significantly. However, malicious attackers exploit Intellectual Property (IP) piracy and Trojan attacks to gain illegal profits. Conventional approaches present defense mechanisms that target either IP piracy or Trojan attacks. In practice, DMFBs may be threatened by both attacks at the same time. This paper presents a comprehensive security system to protect DMFBs from IP piracy and Trojan attacks. We propose an authentication mechanism that protects IP and detects errors caused by Trojans using CCD cameras. With our security system, we can generate secret keys for authentication and determine whether the bioassay is under IP piracy or Trojan attack. Experimental results demonstrate the efficacy of our security system without overhead in the bioassay completion time.

2020-09-28
Gallo, Pierluigi, Pongnumkul, Suporn, Quoc Nguyen, Uy.  2018.  BlockSee: Blockchain for IoT Video Surveillance in Smart Cities. 2018 IEEE International Conference on Environment and Electrical Engineering and 2018 IEEE Industrial and Commercial Power Systems Europe (EEEIC / I CPS Europe). :1–6.
The growing demand for safety in urban environments is supported by monitoring using video surveillance. The need to analyze multiple video flows from different cameras deployed around the city by heterogeneous owners introduces vulnerabilities and privacy issues. Video frames, timestamps, and camera settings can be digitally manipulated by malicious users; the positions of cameras, their orientation and their mechanical settings can be physically manipulated. Digital and physical manipulations may have several effects, including changing the observed scene and potentially violating neighbors' privacy. To address these risks, we introduce BlockSee, a blockchain-based video surveillance system that jointly provides validation and immutability for camera settings and surveillance videos, making them readily available to authorized users in case of events. The encouraging results obtained with BlockSee pave the way for new distributed city-wide monitoring systems.
2020-09-21
Marcinkevicius, Povilas, Bagci, Ibrahim Ethem, Abdelazim, Nema M., Woodhead, Christopher S., Young, Robert J., Roedig, Utz.  2019.  Optically Interrogated Unique Object with Simulation Attack Prevention. 2019 Design, Automation Test in Europe Conference Exhibition (DATE). :198–203.
A Unique Object (UNO) is a physical object with unique characteristics that can be measured externally. The usually analogue measurement can be converted into a digital representation - a fingerprint - which uniquely identifies the object. For practical applications it is necessary that measurements can be performed without the need for specialist equipment or a complex measurement setup. Furthermore, a UNO should be able to defeat simulation attacks; an attacker may replace the UNO with a device or system that produces the expected measurement. Recently a novel type of UNO based on Quantum Dots (QDs) and exhibiting unique photo-luminescence properties has been proposed. The uniqueness of these UNOs is based on quantum effects that can be interrogated using a light source and a camera. The so-called Quantum Confinement UNO (QCUNO) responds uniquely to different light excitation levels, which is exploited for simulation attack protection, as opposed to focusing on features too small to reproduce and therefore difficult to measure. In this paper we describe methods for extraction of fingerprints from the QCUNO. We evaluate our proposed methods using 46 UNOs in a controlled setup. The focus of the evaluation is entropy, error resilience and the ability to detect simulation attacks.
2020-09-14
Wang, Lizhi, Xiong, Zhiwei, Huang, Hua, Shi, Guangming, Wu, Feng, Zeng, Wenjun.  2019.  High-Speed Hyperspectral Video Acquisition By Combining Nyquist and Compressive Sampling. IEEE Transactions on Pattern Analysis and Machine Intelligence. 41:857–870.
We propose a novel hybrid imaging system to acquire 4D high-speed hyperspectral (HSHS) videos with high spatial and spectral resolution. The proposed system consists of two branches: one branch performs Nyquist sampling in the temporal dimension while integrating the whole spectrum, resulting in a high-frame-rate panchromatic video; the other branch performs compressive sampling in the spectral dimension with longer exposures, resulting in a low-frame-rate hyperspectral video. Owing to the high light throughput and complementary sampling, these two branches jointly provide reliable measurements for recovering the underlying HSHS video. Moreover, the panchromatic video can be used to learn an over-complete 3D dictionary to represent each band-wise video sparsely, thanks to the inherent structural similarity in the spectral dimension. Based on the joint measurements and the self-adaptive dictionary, we further propose a simultaneous spectral sparse (3S) model to reinforce the structural similarity across different bands and develop an efficient computational reconstruction algorithm to recover the HSHS video. Both simulation and hardware experiments validate the effectiveness of the proposed approach. To the best of our knowledge, this is the first time that hyperspectral videos can be acquired at a frame rate up to 100fps with commodity optical elements and under ordinary indoor illumination.
Chatterjee, Urbi, Govindan, Vidya, Sadhukhan, Rajat, Mukhopadhyay, Debdeep, Chakraborty, Rajat Subhra, Mahata, Debashis, Prabhu, Mukesh M..  2019.  Building PUF Based Authentication and Key Exchange Protocol for IoT Without Explicit CRPs in Verifier Database. IEEE Transactions on Dependable and Secure Computing. 16:424–437.
Physically Unclonable Functions (PUFs) promise to be a critical hardware primitive for providing unique identities to billions of connected devices in the Internet of Things (IoT). In traditional authentication protocols, a user presents a set of credentials with an accompanying proof such as a password or digital certificate. However, IoT needs more evolved methods, as these classical techniques suffer from the pressing problems of password dependency and the inability to bind access requests to the "things" from which they originate. Additionally, the protocols need to be lightweight and heterogeneous. Although PUFs seem promising for developing such a mechanism, they put forward the open problem of how to do so without needing to store the secret challenge-response pairs (CRPs) explicitly at the verifier end. In this paper, we develop an authentication and key exchange protocol by combining the ideas of Identity-based Encryption (IBE), PUFs and a keyed hash function, and show that this combination can help to do away with this requirement. The security of the protocol is proved formally under the Session Key Security and Universal Composability frameworks. A prototype of the protocol has been implemented to realize a secured video surveillance camera using a combination of an Intel Edison board and a Digilent Nexys-4 FPGA board with an Artix-7 FPGA, together serving as the IoT node. We show that, although the stand-alone video camera can be subjected to a man-in-the-middle attack via IP spoofing using standard network penetration tools, the camera augmented with the proposed protocol resists such attacks, suiting it aptly to an IoT infrastructure and making the protocol deployable for industry.
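
As a loose illustration of only the keyed-hash challenge-response ingredient (the paper's combination with IBE and a hardware PUF, which avoids storing CRPs at the verifier, is not reproduced), here is a sketch in which a software function stands in for the PUF:

```python
import hashlib
import hmac
import os

def soft_puf(device_secret: bytes, challenge: bytes) -> bytes:
    """Software stand-in for a hardware PUF: maps a challenge to a device-unique response.

    A real PUF derives this response from silicon variations; no secret would be
    stored in software, and the verifier would rely on the protocol's IBE/model
    machinery rather than an explicit CRP database.
    """
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

def authenticate(device_secret: bytes) -> bool:
    """One keyed-hash challenge-response round (verifier and device in one process for brevity)."""
    challenge = os.urandom(16)                     # verifier picks a fresh nonce
    response = soft_puf(device_secret, challenge)  # device answers via its PUF
    expected = hmac.new(device_secret, challenge, hashlib.sha256).digest()  # verifier's model
    return hmac.compare_digest(response, expected)

print(authenticate(os.urandom(32)))   # True for the genuine device
```
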
2020-08-28
Huang, Bai-Ruei, Lin, Chang Hong, Lee, Chia-Han.  2012.  Mobile augmented reality based on cloud computing. and Identification Anti-counterfeiting, Security. :1—5.
In this paper, we implemented a mobile augmented reality system based on cloud computing. This system uses a mobile device with a camera to capture images of book spines and sends processed features to the cloud. In the cloud, the features are compared with the database and the information of the best matched book would be sent back to the mobile device. The information will then be rendered on the display via augmented reality. In order to reduce the transmission cost, the mobile device is used to perform most of the image processing tasks, such as the preprocessing, resizing, corner detection, and augmented reality rendering. On the other hand, the cloud is used to realize routine but large quantity feature comparisons. Using the cloud as the database also makes the future extension much more easily. For our prototype system, we use an Android smart phone as our mobile device, and Chunghwa Telecoms hicloud as the cloud.