Bibliography
In recent years, attacks against cyber-physical systems have become increasingly frequent and widespread, and their inventiveness has grown significantly. In particular, zero-day attacks are widely used. The rapid development of the Industrial Internet of Things, the expansion of the application areas of service robots, and the advent of the Internet of Vehicles and the Internet of Military Things have drawn significantly more attention to deceptive attacks. A particularly great threat is posed by deceptive attacks that do not rely on hidden malicious components; such attacks can naturally be mounted against robotic systems. In this paper, we consider an approach to the development of an intrusion detection system for closed-loop robotic systems. The system is based on an abnormal-behavior pattern detection technique and can be used to detect zero-day deceptive attacks. We provide an experimental comparison of our approach with other behavior-based intrusion detection systems.
A novel deep neural network is proposed for accurate and robust crowd counting. Crowd counting is a complex task, as it strongly depends on the characteristics of the deployed camera and, above all, on the scene perspective. Crowd counting is essential in security applications where Internet of Things (IoT) cameras are deployed to help with crowd management tasks. Scene complexity varies greatly, and a medium- to large-scale security system based on IoT cameras must accommodate changes in perspective and in how people appear from different vantage points. To address this, our deep architecture extracts multi-scale features with a pyramid contextual module that provides long-range contextual information and enlarges the receptive field. Experiments were run on three major crowd counting datasets to test the proposed method. The results demonstrate that our method surpasses the performance of state-of-the-art methods.
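A minimal sketch of a pyramid pooling-style contextual module of the kind described above, not the authors' exact architecture: channel sizes, pooling scales, and all names are illustrative assumptions. It pools the feature map at several scales, projects each pooled map, upsamples back to the input resolution, and fuses everything to enlarge the receptive field.

    # Illustrative pyramid contextual module (assumed sizes, not the paper's code).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PyramidContextModule(nn.Module):
        def __init__(self, in_ch=512, pool_sizes=(1, 2, 3, 6)):
            super().__init__()
            # One pooled-and-projected branch per scale.
            self.branches = nn.ModuleList(
                nn.Sequential(nn.AdaptiveAvgPool2d(s),
                              nn.Conv2d(in_ch, in_ch // len(pool_sizes), 1))
                for s in pool_sizes
            )
            self.fuse = nn.Conv2d(in_ch * 2, in_ch, 3, padding=1)

        def forward(self, x):
            h, w = x.shape[2:]
            # Pool at several scales, project, and upsample back to input resolution.
            ctx = [F.interpolate(b(x), size=(h, w), mode="bilinear",
                                 align_corners=False) for b in self.branches]
            return self.fuse(torch.cat([x] + ctx, dim=1))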
In the past decades, learning an effective distance metric between pairs of instances has played an important role in classification and retrieval tasks, for example person identification or malware retrieval in IoT services. The core motivation of recent efforts has been to improve the metric forms, and these efforts have shown promising results in various applications. However, such models often fail to produce a reliable metric on ambiguous test sets. This happens mainly because the sampling process of the training set is not representative of the distribution of negative samples, especially examples close to the boundary between categories (also called hard negative samples). In this paper, we focus on addressing this problem and propose an adaptive margin deep adversarial metric learning (AMDAML) framework. It exploits numerous common negative samples to generate potentially hard (adversarial) negatives and applies them to facilitate robust metric learning. Unlike previous approaches, which typically depend on search or data augmentation to find hard negative samples, the generation of adversarial negative instances avoids the limitations imposed by domain knowledge and by the number of constraint pairs. Specifically, in order to prevent overfitting or underfitting during training, we propose an adaptive margin loss that preserves a flexible margin between the negative samples (both adversarial and original) and the positive samples. We simultaneously train the adversarial negative generator and the conventional metric objective in an adversarial manner and learn feature representations that are more precise and robust. Experimental results on practical datasets clearly demonstrate the superiority of AMDAML over representative state-of-the-art metric learning models.
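A sketch of what a flexible, adaptive margin could look like in a triplet-style metric loss; the particular margin function is an assumption for illustration and is not the paper's exact AMDAML objective or its adversarial generator.

    # Illustrative adaptive-margin metric loss (assumed form, not the paper's code).
    import torch
    import torch.nn.functional as F

    def adaptive_margin_loss(anchor, positive, negative, base_margin=0.2, scale=0.5):
        d_pos = F.pairwise_distance(anchor, positive)
        d_neg = F.pairwise_distance(anchor, negative)
        # Margin grows with the positive distance, keeping it flexible rather than fixed.
        margin = base_margin + scale * d_pos.detach()
        return F.relu(d_pos - d_neg + margin).mean()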
The open-source nature of the Android OS makes it possible for manufacturers to ship custom versions of the OS along with a set of pre-installed apps, often for product differentiation. Some device vendors have recently come under scrutiny for potentially invasive private data collection practices and other potentially harmful or unwanted behavior of the pre-installed apps on their devices. Yet, the landscape of pre-installed software in Android has largely remained unexplored, particularly in terms of the security and privacy implications of such customizations. In this paper, we present the first large-scale study of pre-installed software on Android devices from more than 200 vendors. Our work relies on a large dataset of real-world Android firmware acquired worldwide using crowd-sourcing methods. This allows us to answer questions related to the stakeholders involved in the supply chain, from device manufacturers and mobile network operators to third-party organizations like advertising and tracking services, and social network platforms. Our study allows us to also uncover relationships between these actors, which seem to revolve primarily around advertising and data-driven services. Overall, the supply chain around Android's open source model lacks transparency and has facilitated potentially harmful behaviors and backdoored access to sensitive data and services without user consent or awareness. We conclude the paper with recommendations to improve transparency, attribution, and accountability in the Android ecosystem.
This paper proposes a method for detecting anomalies in video data. A Variational Autoencoder (VAE) is used to reduce the dimensionality of video frames, generating latent-space information comparable to low-dimensional sensory data (e.g., positioning, steering angle) and making feasible the development of a consistent multi-modal architecture for autonomous vehicles. An Adapted Markov Jump Particle Filter defined by discrete and continuous inference levels is employed to predict the following frames and to detect anomalies in new video sequences. Our method is evaluated on different video scenarios in which a semi-autonomous vehicle performs a set of tasks in a closed environment.
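A minimal sketch of the dimensionality-reduction step described above: a VAE encoder compressing a frame into a latent code small enough to sit alongside low-dimensional sensory data. The layer sizes, input resolution, and latent dimension are assumptions, not the paper's architecture.

    # Illustrative VAE encoder for video frames (assumed architecture).
    import torch
    import torch.nn as nn

    class FrameVAEEncoder(nn.Module):
        def __init__(self, latent_dim=8):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
                nn.Flatten(),
            )
            self.mu = nn.Linear(64 * 16 * 16, latent_dim)
            self.logvar = nn.Linear(64 * 16 * 16, latent_dim)

        def forward(self, frame):                       # frame: (B, 3, 64, 64)
            h = self.conv(frame)
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
            return z, mu, logvar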
By analogy with nature, sight is the main integral component of robotic complexes, including unmanned vehicles. Accordingly, one of the urgent tasks in the modern development of unmanned vehicles is providing security for new advanced systems, algorithms, methods, and principles of robot spatial navigation. In this paper, we present an approach to protecting machine vision systems based on deep learning technologies. At the heart of the approach lies the “Feature Squeezing” method, which operates during model inference and allows “adversarial” examples to be detected. Considering the urgency and importance of the target process, the features of unmanned vehicle hardware platforms, and the need to detect objects in real time, we propose an additional, simple computational procedure for localizing and classifying the required objects whenever a predefined “adversarial” detection threshold is exceeded.
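A sketch of the general Feature Squeezing check at inference time: compare the model's prediction on the original input with its prediction on a bit-depth-reduced (“squeezed”) copy, and flag the input as adversarial when the disagreement exceeds a threshold. The model interface and threshold value are assumptions for illustration.

    # Feature Squeezing-style detection sketch (threshold and model API assumed).
    import numpy as np

    def squeeze_bit_depth(x, bits=4):
        levels = 2 ** bits - 1
        return np.round(x * levels) / levels          # x assumed in [0, 1]

    def is_adversarial(model, x, threshold=0.5):
        p_orig = model(x)                             # model returns class probabilities
        p_squeezed = model(squeeze_bit_depth(x))
        # Large prediction shift under squeezing suggests an adversarial input.
        return np.abs(p_orig - p_squeezed).sum() > threshold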
Today, Internet of Things (IoT) devices mostly operate in enclosed, proprietary environments. To unfold the full potential of IoT applications, a unifying and permissionless environment is crucial, in which all IoT devices, even those unknown to each other, can trade services and assets across various domains. In order to realize such applications, uniquely resolvable identities are essential. However, quantifiable trust in identities and their authentication is not trivially provided in such an environment due to the absence of a trusted authority. This research presents a new identity and trust framework for IoT devices based on Distributed Ledger Technology (DLT). IoT devices assign identities to themselves, which are managed publicly and in a decentralized manner on the DLT network as Self-Sovereign Identities (SSI). In addition to the Identity Management System (IdMS), the framework provides a Web of Trust (WoT) approach to enable automatic trust rating of arbitrary identities. The framework uses the IOTA Tangle to access and store data, achieving high scalability and low computational overhead. To demonstrate the feasibility of our framework, we provide a proof-of-concept implementation and evaluate it against the set objectives for real-world applicability as well as its vulnerability to common threats in IdMSs and WoTs.
The regularity of devastating cyber-attacks has made cybersecurity a grand societal challenge. Many cybersecurity professionals closely examine the international Dark Web to proactively pinpoint potential cyber threats. Despite its potential, the Dark Web contains hundreds of thousands of non-English posts. While machine translation (MT) is the prevailing approach to processing non-English text, applying MT to hacker forum text results in mistranslations. In this study, we draw upon Long Short-Term Memory (LSTM), Cross-Lingual Knowledge Transfer (CLKT), and Generative Adversarial Network (GAN) principles to design a novel Adversarial CLKT (A-CLKT) approach. A-CLKT operates on untranslated text to retain the original semantics of the language and leverages the collective knowledge about cyber threats across languages to create a language-invariant representation without any manual feature engineering or external resources. Three experiments demonstrate how A-CLKT outperforms state-of-the-art machine learning, deep learning, and CLKT algorithms in identifying cyber threats in French and Russian forums.
The existing anonymized differential privacy model adopts a unified anonymity method and ignores differences in personal privacy, which may lead to excessive or insufficient protection of the original data [1]. Therefore, this paper proposes a personalized k-anonymity model for tuples (PKA) and a differential privacy data publishing algorithm (DPPA) based on personalized anonymity. First, based on the tuple personality factor set by the user in the original data set, the values are classified and the corresponding privacy protection relevance is calculated. Then, according to the tuple personality factor classification value, the data set is clustered with different anonymity levels, and the quasi-identifier attributes of each cluster are aggregated and noise is added to achieve anonymized differential privacy. Finally, the subsets are merged to obtain a data set that meets the release requirements. The correctness of the algorithm is analyzed theoretically, and the feasibility and effectiveness of the proposed algorithm are verified by comparison with similar algorithms.
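A minimal sketch of the per-cluster noise-adding step described above: quasi-identifier values within a cluster are replaced by their aggregate plus Laplace noise scaled to that cluster's privacy budget, so clusters with different anonymity levels receive different amounts of noise. The budgets, sensitivity, and aggregation choice are illustrative assumptions, not the paper's algorithm.

    # Per-cluster Laplace-noise aggregation sketch (budgets and sensitivity assumed).
    import numpy as np

    def anonymize_cluster(values, epsilon, sensitivity=1.0):
        # Replace all values in the cluster by their noisy mean.
        noisy_mean = np.mean(values) + np.random.laplace(scale=sensitivity / epsilon)
        return np.full_like(values, noisy_mean, dtype=float)

    # Clusters with different anonymity levels get different budgets (assumed values).
    clusters = {"low_privacy": np.array([30, 32, 35]), "high_privacy": np.array([52, 55])}
    budgets = {"low_privacy": 1.0, "high_privacy": 0.1}
    published = {k: anonymize_cluster(v, budgets[k]) for k, v in clusters.items()}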
We propose a novel attestation architecture for the Internet of Things (IoT). Our distributed attestation network (DAN) utilizes blockchain technology to store and share device information. We present the design of this new attestation architecture as well as a prototype system built to emulate an IoT deployment with a network of Raspberry Pi devices, Infineon TPMs, and a Hyperledger Fabric blockchain.
We investigate whether the random feature selection approach proposed in [1] to improve the robustness of forensic detectors to targeted attacks can be extended to detectors based on deep learning features. In particular, we study the transferability of adversarial examples targeting an original CNN image manipulation detector to other detectors (a fully connected neural network and a linear SVM) that rely on a random subset of the features extracted from the flatten layer of the original network. The results, obtained on three image manipulation detection tasks (resizing, median filtering, and adaptive histogram equalization), two original network architectures, and three classes of attacks, show that feature randomization helps to hinder attack transferability, even if, in some cases, simply changing the architecture of the detector, or even retraining it, is enough to prevent the transferability of the attacks.
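A sketch of the random feature selection idea referred to above: each secondary detector is trained on its own random subset of the CNN flatten-layer features, so an attack crafted against the full feature set transfers less easily. The feature dimensions, subset size, and SVM choice are illustrative assumptions.

    # Random feature subset detector sketch (dimensions and classifier assumed).
    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    n_features, n_keep = 4096, 1024                 # flatten-layer size, subset size
    subset = rng.choice(n_features, size=n_keep, replace=False)

    def train_random_subset_detector(deep_features, labels):
        # deep_features: (n_samples, n_features) activations from the flatten layer.
        return LinearSVC().fit(deep_features[:, subset], labels)

    def predict(detector, deep_features):
        return detector.predict(deep_features[:, subset])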
The mechanism by which peers randomly choose logical neighbors without any knowledge of the underlying physical topology can cause a delay overhead in information propagation, which makes the system vulnerable to double-spend attacks. This paper introduces a proximity-aware extension to the current Bitcoin protocol, named Master Node Based Clustering (MNBC). The ultimate purpose of the proposed protocol is to improve the information propagation delay in the Bitcoin network.
Bluetooth Classic (BT) remains the de facto connectivity technology in car stereo systems, wireless headsets, laptops, and a plethora of wearables, especially for applications that require high data rates, such as audio streaming, voice calling, and tethering. Unlike Bluetooth Low Energy (BLE), where address randomization is a feature available to manufacturers, BT addresses are not randomized because they are largely believed to be immune to tracking attacks. We analyze the design of BT and devise a robust de-anonymization technique that hinges on apparently benign information leaking from frame encoding to infer a piconet's clock, hopping sequence, and ultimately the Upper Address Part (UAP) of the master device's physical address, none of which are ever exchanged in the clear. Used together with the Lower Address Part (LAP), which is present in all transmitted frames, this enables tracking of the piconet master, thereby debunking the privacy guarantees of BT. We validate this attack by developing the first Software-Defined Radio (SDR) based sniffer that allows full BT spectrum analysis (79 MHz) and implements the proposed de-anonymization technique. We study the feasibility of privacy attacks with multiple testbeds, considering different numbers of devices, traffic regimes, and communication ranges. We demonstrate that it is possible to track BT devices up to 85 meters from the sniffer, and to achieve more than 80% device identification accuracy within less than 1 second of sniffing and 100% detection within less than 4 seconds. Lastly, we study the identified privacy attack in the wild, capturing BT traffic at a road junction over 5 days, and demonstrate that our system can re-identify hundreds of users and infer their commuting patterns.
One of the most efficient tools for human face recognition is neural networks. However, recognition results can be degraded by facial expressions and other deviations from the canonical face representation. In this paper, we propose a resampling method for human faces represented by 3D point clouds. The method is based on a non-rigid Iterative Closest Point (ICP) algorithm. To improve facial recognition performance, we use a combination of the proposed method and a convolutional neural network (CNN). Computer simulation results are provided to illustrate the performance of the proposed method.
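For orientation, a simplified rigid ICP iteration on point clouds is sketched below; the paper uses a non-rigid variant, so this only illustrates the closest-point-and-align idea, and all details are assumptions.

    # One rigid ICP iteration (illustrative only; the paper's method is non-rigid).
    import numpy as np
    from scipy.spatial import cKDTree

    def icp_step(source, target):
        # 1) Closest-point correspondences from source to target.
        matched = target[cKDTree(target).query(source)[1]]
        # 2) Best rigid transform via SVD (Procrustes).
        src_c, tgt_c = source.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((source - src_c).T @ (matched - tgt_c))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                  # avoid a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = tgt_c - R @ src_c
        return source @ R.T + t                   # updated source points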
Electronic voting systems have enhanced the efficiency of student election management in universities, making such elections less expensive, logistically simpler, and more accurate than manually conducted elections. However, e-voting systems that are confined to campus hall voting inhibit access for eligible voters who are away from campus. This study examined the challenges of limited access and voter impersonation in the 2018 student elections at Kabarak University. The main objective of this study was therefore to upgrade the offline electronic voting system by developing a secure online voting system and deploying it for use in the 2019 student elections at Kabarak University. The resulting system and the development process employed demonstrate the applicability of secure online voting not only in the higher education context, but also in other democracies where online access and authentication in the voting process are a requisite.
Indoor localization has been a popular research subject in recent years. Usually, object localization using sound involves devices on the objects acquiring data from stationary sound sources, or localizing the objects with external sensors when the object itself generates sound. Indoor localization systems using microphones have traditionally also relied on several microphones, which limits cost efficiency and increases the space required by the system. In this paper, the goal is to investigate whether it is possible for a stationary system to localize a silent object in a room with only one microphone and ambient noise as the information carrier. A subtraction method is combined with a fingerprinting technique to define and distinguish the noise absorption characteristic of the silent object in the frequency domain for different object positions. The absorption characteristics of several object positions are taken as comparison references, serving as fingerprints of known positions for the object. The experimental results verify the feasibility of this approach and show that noise-signal-based lateral localization of silent objects can be achieved.
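A sketch of the subtraction-plus-fingerprint idea: the object's absorption signature is taken as the difference between the empty-room noise spectrum and the spectrum with the object present, and a new measurement is matched to the nearest stored fingerprint of a known position. Signal shapes and the distance measure are illustrative assumptions.

    # Absorption-fingerprint localization sketch (distance measure assumed).
    import numpy as np

    def absorption_fingerprint(empty_room_signal, object_signal):
        # Spectral magnitude difference characterizes what the object absorbs.
        return np.abs(np.fft.rfft(empty_room_signal)) - np.abs(np.fft.rfft(object_signal))

    def localize(new_fingerprint, reference_fingerprints):
        # reference_fingerprints: {position_label: fingerprint of a known position}
        return min(reference_fingerprints,
                   key=lambda pos: np.linalg.norm(new_fingerprint - reference_fingerprints[pos]))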
Robot Operating System (ROS) is becoming more and more important and is widely used by developers and researchers in various domains. One of the most important fields where it is being used is the self-driving car industry. However, this framework is far from being totally secure, and the existing security breaches do not have robust solutions. In this paper we focus on camera vulnerabilities, as the camera is often the most important source for environment discovery and the decision-making process. We propose an unsupervised anomaly detection tool for detecting suspicious frames in camera flows. Our solution is based on spatio-temporal autoencoders used to faithfully reconstruct the camera frames and detect abnormal ones by measuring the difference with the input. We test our approach on a real-world dataset, i.e., flows coming from embedded cameras of self-driving cars. Our solution outperforms existing works on different scenarios.
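A minimal sketch of the detection criterion described above: a frame (or short frame sequence) is flagged as anomalous when the autoencoder's reconstruction error exceeds a threshold estimated from normal data. The model interface, error measure, and threshold rule are assumptions for illustration.

    # Reconstruction-error anomaly flagging sketch (threshold rule assumed).
    import numpy as np

    def reconstruction_error(autoencoder, frames):
        reconstructed = autoencoder(frames)               # same shape as input batch
        return np.mean((frames - reconstructed) ** 2, axis=(1, 2, 3))

    def fit_threshold(errors_on_normal_data, k=3.0):
        # Mean plus k standard deviations of errors observed on normal frames.
        return errors_on_normal_data.mean() + k * errors_on_normal_data.std()

    def is_anomalous(autoencoder, frames, threshold):
        return reconstruction_error(autoencoder, frames) > threshold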