Bibliography
With the wide application of modern robots, growing concerns have been raised about the security and privacy of robotic systems and applications. Although the Robot Operating System (ROS) is commonly used on different robots, little work has considered the security aspects of ROS. As ROS does not employ even a basic permission control mechanism, applications can access any resource without limitation, which could result in equipment damage, harm to humans, and privacy leakage. In this paper we propose an access control mechanism for ROS based on an extended policy-based access control (PBAC) model. Specifically, we extend ROS with an additional node dedicated to access control, which provides user identity and permission management services. The proposed mechanism also allows the administrator to revoke a permission dynamically. We implemented the proposed method in ROS and demonstrated its applicability and performance through several case studies.
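A minimal sketch of the kind of policy check such a dedicated access-control node could perform is shown below; all names (Policy, grant, revoke, check) are illustrative assumptions written in plain Python, not the paper's actual ROS node or message definitions.

```python
# Minimal sketch of a policy-based access check for ROS resources.
# All names here are illustrative; the paper's actual ROS node and
# service definitions are not reproduced.

from dataclasses import dataclass, field

@dataclass
class Policy:
    # Maps a node identity to the set of topics/services it may use.
    permissions: dict = field(default_factory=dict)

    def grant(self, node_id: str, resource: str) -> None:
        self.permissions.setdefault(node_id, set()).add(resource)

    def revoke(self, node_id: str, resource: str) -> None:
        # Dynamic revocation, as described in the abstract.
        self.permissions.get(node_id, set()).discard(resource)

    def check(self, node_id: str, resource: str) -> bool:
        return resource in self.permissions.get(node_id, set())

policy = Policy()
policy.grant("camera_node", "/image_raw")
assert policy.check("camera_node", "/image_raw")
policy.revoke("camera_node", "/image_raw")
assert not policy.check("camera_node", "/image_raw")
```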
The difficulty of detecting, responding to, and tracing malicious behavior in the cloud has brought great challenges to law enforcement in combating cybercrime. This paper presents a malicious-behavior-oriented framework for detection, emergency response, traceability, and digital forensics in the cloud environment. A cloud-based malicious behavior detection mechanism based on SDN is constructed, which implements full-traffic flow detection and malicious virtual machine detection based on memory analysis. The emergency response and traceability module clarifies the type of malicious behavior and the impact of the event, and locates the source of the event. The key nodes and paths of the infection topology or propagation path of the malicious behavior are located, and security measures are dispatched in a timely manner. The proposed IaaS-based forensics module implements memory evidence extraction and analysis techniques for virtualization facilities, which address the volatile data loss problems that often occur in traditional forensic methods.
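As an illustration only, the following sketch flags suspicious virtual machines from SDN flow statistics by counting flows per source; the threshold and the (src, dst, port) tuple format are assumptions, and the paper's memory-analysis component is not reproduced.

```python
# Illustrative sketch of flagging suspicious VMs from SDN flow statistics.
# The threshold and the per-source flow count feature are assumptions.

from collections import Counter

def suspicious_sources(flows, max_flows_per_src=1000):
    """flows: iterable of (src_ip, dst_ip, dst_port) tuples from the controller."""
    counts = Counter(src for src, _, _ in flows)
    return {src for src, n in counts.items() if n > max_flows_per_src}
```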
Deep neural networks are susceptible to various inference attacks as they remember information about their training data. We design white-box inference attacks to perform a comprehensive privacy analysis of deep learning models. We measure the privacy leakage through parameters of fully trained models as well as the parameter updates of models during training. We design inference algorithms for both centralized and federated learning, with respect to passive and active inference attackers, and assuming different adversary prior knowledge. We evaluate our novel white-box membership inference attacks against deep learning algorithms to trace their training data records. We show that a straightforward extension of the known black-box attacks to the white-box setting (through analyzing the outputs of activation functions) is ineffective. We therefore design new algorithms tailored to the white-box setting by exploiting the privacy vulnerabilities of the stochastic gradient descent algorithm, which is the algorithm used to train deep neural networks. We investigate the reasons why deep learning models may leak information about their training data. We then show that even well-generalized models are significantly susceptible to white-box membership inference attacks, by analyzing state-of-the-art pre-trained and publicly available models for the CIFAR dataset. We also show how adversarial participants, in the federated learning setting, can successfully run active membership inference attacks against other participants, even when the global model achieves high prediction accuracies.
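A minimal sketch of one white-box signal the abstract points to, the per-example gradient of the loss with respect to the model parameters, is given below in PyTorch; the thresholding idea is our illustration, not the paper's full attack architecture.

```python
# Sketch of a white-box membership signal: the norm of the per-example
# gradient of the loss w.r.t. the model parameters. Exploiting SGD
# gradients follows the abstract; the simple threshold test is only an
# illustration of how the signal could be used.

import torch

def grad_norm(model, loss_fn, x, y):
    model.zero_grad()
    loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
    loss.backward()
    sq = sum((p.grad ** 2).sum() for p in model.parameters() if p.grad is not None)
    return torch.sqrt(sq).item()

# Training members tend to yield smaller gradient norms than non-members
# on a trained model, so a threshold on grad_norm gives a crude
# membership test.
```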
In the smart grid, residents' electricity usage needs to be periodically measured and reported for better energy management. At the same time, real-time collection of residents' electricity consumption may incur privacy leakage, which has motivated research on privacy-preserving aggregation of electricity readings. Most previous studies either rely on a trusted third party (TTP) or suffer from expensive computation. In this paper, we first reveal the privacy flaws of a very recent scheme pursuing privacy preservation without relying on a TTP. By presenting concrete attacks, we show that this scheme fails to meet its design goals. Then, for better privacy protection, we construct a new scheme called PMDA, which utilizes Shamir's secret sharing to allow smart meters to negotiate aggregation parameters in the absence of a TTP. Using only lightweight cryptography, PMDA efficiently supports multi-functional aggregation of electricity readings while preserving residents' privacy. Theoretical analysis is provided with regard to PMDA's security and efficiency. Moreover, experimental data obtained from a prototype indicates that our proposal is efficient and feasible for practical deployment.
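The following sketch illustrates how Shamir's secret sharing enables TTP-free additive aggregation: each meter shares its reading among the others, and summed shares reconstruct only the aggregate. The field size and threshold are assumptions; PMDA's actual parameter negotiation protocol is not reproduced.

```python
# Minimal sketch of TTP-free additive aggregation with Shamir's secret
# sharing. Each reading is split into shares; summing shares pointwise
# yields shares of the sum, so only the aggregate is ever reconstructed.

import random

P = 2**61 - 1  # a prime field large enough for aggregated readings

def share(secret, n, t):
    """Split `secret` into n Shamir shares with threshold t."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 to recover the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

readings = [30, 42, 17]            # three meters' private readings
n, t = 3, 3
all_shares = [share(r, n, t) for r in readings]
# Each point's shares are summed; the aggregator reconstructs only the sum.
summed = [(x + 1, sum(s[x][1] for s in all_shares) % P) for x in range(n)]
assert reconstruct(summed) == sum(readings)
```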
It is well known that distributed cyber attacks simultaneously launched from many hosts have caused some of the most serious problems in recent years, including privacy leakage and denial of service. Thus, how to detect those attacks at an early stage has become an important and urgent topic in the cyber security community. For this purpose, recognizing C&C (Command & Control) communication between compromised bots and the C&C server becomes a crucially important issue, because C&C communication takes place in the preparation phase of distributed attacks. Although signature-based attack detection has long been applied in practice, it is well known that it cannot efficiently deal with new kinds of attacks. In recent years, ML (machine learning)-based detection methods have been studied widely. In those methods, feature selection is obviously very important to the detection performance. We previously utilized up to 55 features to pick out C&C traffic in order to accomplish early detection of DDoS attacks. In this work, we try to answer the question: are all of those features really necessary? We mainly investigate how the detection performance changes as features are removed, starting from those with the lowest importance, and we try to clarify which features deserve attention for early detection of distributed attacks. We use honeypot data collected from 2008 to 2013. SVM (Support Vector Machine) and PCA (Principal Component Analysis) are utilized for feature selection, and SVM and RF (Random Forest) are used for building the classifiers. We find that the detection performance generally improves as more features are utilized; however, after the number of features reaches around 40, the performance changes little even if more features are used. It is also verified that, in some specific cases, more features do not always mean better detection performance. We also discuss the 10 most important features, which have the biggest influence on classification.
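A sketch of the core experiment, ranking features with a Random Forest and re-measuring accuracy as the feature set shrinks, might look as follows; the synthetic data merely stands in for the honeypot traffic features.

```python
# Sketch of the "remove least-important features" experiment: rank all 55
# features with a Random Forest, then measure cross-validated accuracy as
# the feature set shrinks. Synthetic data replaces the honeypot traffic.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=55, n_informative=40,
                           random_state=0)

# Rank features from most to least important.
ranking = np.argsort(RandomForestClassifier(random_state=0)
                     .fit(X, y).feature_importances_)[::-1]

for k in (55, 40, 20, 10):
    acc = cross_val_score(RandomForestClassifier(random_state=0),
                          X[:, ranking[:k]], y, cv=5).mean()
    print(f"{k:2d} features: accuracy {acc:.3f}")
```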
The proliferation of wearable devices, e.g., smartwatches and activity trackers, with embedded sensors has already shown great potential for monitoring and inferring human daily activities. This paper reveals a serious security breach of wearable devices in the context of divulging secret information (i.e., key entries) while people access key-based security systems. Existing methods of obtaining such secret information rely on the installation of dedicated hardware (e.g., a video camera or fake keypad), or on training with labeled data from body sensors, which restricts their use in practical adversary scenarios. In this work, we show that a wearable device can be exploited to discriminate mm-level distances and directions of the user's fine-grained hand movements, which enables attackers to reproduce the trajectories of the user's hand and further recover the secret key entries. In particular, our system confirms the possibility of using the embedded sensors in wearable devices, i.e., accelerometers, gyroscopes, and magnetometers, to derive the moving distance of the user's hand between consecutive key entries, regardless of the pose of the hand. Our Backward PIN-Sequence Inference algorithm exploits the inherent physical constraints between key entries to infer the complete user key entry sequence. Extensive experiments were conducted with over 5,000 key entry traces collected from 20 adults on key-based security systems (i.e., ATM keypads and regular keyboards), testing different kinds of wearables. Results demonstrate that the technique achieves 80% accuracy with only one try and more than 90% accuracy with three tries, which, to our knowledge, makes it the first technique to reveal personal PINs by leveraging wearable devices without the need for labeled training data or contextual information.
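As a simplified illustration, the displacement between consecutive key entries can be approximated by double-integrating accelerometer samples; a real attack must first rotate readings into a fixed frame using the gyroscope and magnetometer and correct for drift, steps omitted in this sketch.

```python
# Illustrative sketch: recover net hand displacement between two key
# entries by double-integrating linear acceleration. Frame rotation and
# drift correction, which a real attack needs, are omitted.

import numpy as np

def displacement(accel, dt):
    """accel: (N, 3) array of linear acceleration in m/s^2, sampled every dt s."""
    velocity = np.cumsum(accel, axis=0) * dt        # first integration
    position = np.cumsum(velocity, axis=0) * dt     # second integration
    return position[-1]                             # net displacement vector

# e.g., 0.5 s of 100 Hz samples for a short hand movement between keys
rng = np.random.default_rng(0)
print(displacement(rng.normal(0, 0.1, (50, 3)), dt=0.01))
```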
In data analysis, it is always a tough task to strike a balance between privacy and the applicability of the data. Due to the demand for individual privacy, data are more or less obscured before being released or outsourced to avoid possible privacy leakage. This process is called de-identification. In evaluating a de-identification policy, the two most important aspects are the re-identification risk and the information loss. In this paper, we introduce a novel policy search method to efficiently find proper de-identification policies that satisfy an acceptable re-identification risk while retaining the information residing in the data. Using the UCI Machine Learning Repository as our real-world dataset, the measured re-identification risk reflects the true risk of the de-identified data under the de-identification policies. Moreover, using the proposed algorithm, one can efficiently acquire policies with higher information entropy.
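A sketch of scoring a single de-identification policy, with re-identification risk derived from quasi-identifier group sizes and retained information measured as entropy, is given below; the column names and generalization rules are hypothetical, not the paper's policy language.

```python
# Sketch of scoring one de-identification policy: risk from the sizes of
# quasi-identifier groups, retained information as the entropy of the
# generalized data. Columns and generalization rules are hypothetical.

import math
import pandas as pd

df = pd.DataFrame({"age": [23, 25, 31, 34, 35, 52],
                   "zip": ["12345", "12345", "12349", "12349", "12349", "98765"]})

def apply_policy(df):
    out = df.copy()
    out["age"] = (out["age"] // 10) * 10   # generalize age to decades
    out["zip"] = out["zip"].str[:3]        # truncate ZIP to 3 digits
    return out

def risk_and_entropy(df):
    sizes = df.groupby(list(df.columns)).size()
    risk = (1 / sizes).mean()              # simple per-group risk score (1/size)
    probs = sizes / sizes.sum()
    entropy = -(probs * probs.map(math.log2)).sum()
    return risk, entropy

print(risk_and_entropy(apply_policy(df)))
```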