Bibliography
The Internet of Things (IoT) and its applications are becoming commonplace as the number of devices grows, yet IoT networks remain at constant risk of attack. It is therefore crucial for an IoT network design to identify attackers accurately and promptly. Many solutions have been proposed, mainly concerning secure IoT architectures and classification algorithms, but none of them has paid enough attention to reducing complexity. Our proposal in this paper is an edge-cloud architecture that fulfills the detection task right at the edge layer, near the source of the attacks, for quick response and versatility, as well as to reduce the cloud's workload. We also propose a multi-attack detection mechanism called LCHA (Low-Complexity detection solution with High Accuracy), which has low enough complexity for deployment at the edge while still maintaining high accuracy. The performance of our proposed mechanism is compared with that of other machine learning and deep learning methods on the latest BoT-IoT dataset. The results show that LCHA outperforms NN, CNN, RNN, KNN, SVM, RF, and Decision Tree in terms of accuracy, and NN in terms of complexity.
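The abstract does not specify the LCHA mechanism itself, but the kind of accuracy-versus-training-cost comparison it describes can be sketched with scikit-learn. In the minimal sketch below, the synthetic feature matrix is a stand-in for BoT-IoT features (an assumption), and only the classical baselines named in the abstract (KNN, SVM, RF, Decision Tree) are benchmarked:

    # Illustrative sketch only: compares accuracy and training time of the
    # classical baselines named in the abstract. Loading and featurizing the
    # actual BoT-IoT dataset, and LCHA itself, are not specified above and
    # are therefore omitted; the data here is synthetic.
    import time
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic stand-in for multi-class attack-detection features.
    X, y = make_classification(n_samples=5000, n_features=20, n_classes=4,
                               n_informative=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0)

    baselines = {
        "KNN": KNeighborsClassifier(),
        "SVM": SVC(),
        "RF": RandomForestClassifier(random_state=0),
        "DecisionTree": DecisionTreeClassifier(random_state=0),
    }
    for name, clf in baselines.items():
        start = time.perf_counter()
        clf.fit(X_tr, y_tr)
        elapsed = time.perf_counter() - start
        print(f"{name}: accuracy={clf.score(X_te, y_te):.3f}, "
              f"train_time={elapsed:.2f}s")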
The design of attacks for cyber-physical systems (CPS) is critical for assessing CPS resilience at design time and run time, and for generating rich datasets from testbeds for research. Attacks against cyber-physical systems differ from IT attacks in that their main objective is to harm the physical system; both cyber and physical system knowledge are therefore needed to design such attacks. Current practice for generating attacks either focuses on the cyber part of the system, drawing on the existing body of IT security knowledge, or uses heuristics to inject attacks that could potentially harm the physical process. In this paper, we present a systematic approach to automatically generate integrity attacks from the CPS safety and control specifications, without knowledge of the physical system or its dynamics. The generated attacks violate the system's operational and safety requirements and hence present a genuine test of system resilience. We present an algorithm to automate the development of the malware payload. Several examples are given throughout the paper to illustrate the proposed approach.
Differential privacy is a concept for quantifying the disclosure of private information, controlled by the privacy parameter ε. However, an intuitive interpretation of ε is needed to explain the privacy loss to data engineers and data subjects. In this paper, we conduct a worst-case study of differential privacy risks. We generalize an existing model and reduce its complexity to provide more understandable statements on the privacy loss. To this end, we analyze the impact of the parameters and introduce the notions of a global privacy risk and a global privacy leak.
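As standard background on the role of ε (the definition below is the textbook one, not notation introduced by this paper): a randomized mechanism \(\mathcal{M}\) is ε-differentially private if for all neighboring datasets \(D, D'\) and every set \(S\) of outputs,

    \Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[\mathcal{M}(D') \in S]

so in the worst case an adversary's likelihood ratio for any observation changes by at most a factor of \(e^{\varepsilon}\); for example, ε = ln 2 ≈ 0.69 means the probability of any inference can at most double when one individual's data is added or removed.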
With the recognition of cyberspace as an operating domain, concerted effort is now being placed on addressing it in the whole-of-domain manner found in the land, sea, undersea, air, and space domains. Among the first steps in this effort is applying the standard supporting concepts of security, defense, and deterrence to the cyber domain. This paper presents an architecture that helps realize forward defense in cyberspace, wherein adversarial actions are repulsed as close to their origin as possible. However, substantial work remains to make the architecture an operational reality, including furthering fundamental research in cyber science, conducting design trade-off analyses, and developing appropriate public policy frameworks.
Federated learning is a distributed learning technique in which machine learning models are trained on the client devices where the local training data resides. The training is coordinated via a central server that is typically controlled by the intended owner of the resulting model. By avoiding the need to transport the training data to the central server, federated learning improves privacy and efficiency, but it raises the risk of model theft by clients because the resulting model is available on every client device. Even if the application software used for local training attempts to prevent direct access to the model, a malicious client may bypass any such restrictions by reverse engineering the application software. Watermarking is a well-known deterrence method against model theft, providing model owners the means to demonstrate ownership of their models. Several recent deep neural network (DNN) watermarking techniques use backdooring: training the models with additional mislabeled data. Backdooring requires full access to the training data and control of the training process. This is feasible when a single party trains the model in a centralized manner, but not in a federated learning setting where the training process and training data are distributed among several client devices. In this paper, we present WAFFLE, the first approach to watermarking DNN models trained using federated learning. It introduces a retraining step at the server after each aggregation of local models into the global model. We show that WAFFLE efficiently embeds a resilient watermark into models, incurring only negligible degradation in test accuracy (-0.17%), and does not require access to training data. We also introduce a novel technique to generate the backdoor used as a watermark. It outperforms prior techniques, imposing no communication overhead and only low computational overhead (+3.2%). (The research report version of this paper is also available at https://arxiv.org/abs/2008.07298, and the code for reproducing our work can be found at https://github.com/ssg-research/WAFFLE.)
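The abstract confirms only the high-level control flow: aggregate the local models, then retrain the global model at the server on the watermark samples. The following minimal PyTorch sketch illustrates that loop; the helper names (client_state_dicts, watermark_loader) and the plain FedAvg averaging are assumptions for illustration, not the authors' implementation (see the linked repository for that):

    # Minimal sketch of the server round described in the abstract:
    # aggregate client updates, then re-embed the watermark by retraining
    # on the (mislabeled) backdoor/watermark samples only.
    import torch

    def fedavg(state_dicts):
        """Average client parameters (plain FedAvg); assumes float tensors."""
        return {k: torch.stack([sd[k].float() for sd in state_dicts]).mean(dim=0)
                for k in state_dicts[0]}

    def server_round(global_model, client_state_dicts, watermark_loader,
                     epochs=1, lr=1e-3):
        # 1) Standard aggregation of local models into the global model.
        global_model.load_state_dict(fedavg(client_state_dicts))
        # 2) WAFFLE-style extra step: retrain on the watermark set so the
        #    backdoor survives aggregation.
        opt = torch.optim.SGD(global_model.parameters(), lr=lr)
        loss_fn = torch.nn.CrossEntropyLoss()
        global_model.train()
        for _ in range(epochs):
            for x, y in watermark_loader:
                opt.zero_grad()
                loss_fn(global_model(x), y).backward()
                opt.step()
        return global_model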
A physical protection system (PPS) is developed to protect assets or facilities against threats. A systematic analysis of the capabilities and intentions of potential threats is needed, resulting in a so-called Design Basis Threat (DBT) document. Proper development of the DBT is important for identifying the system requirements needed to adequately protect a system and for optimizing the resources required for the PPS. In this paper we propose a model-based systems engineering approach for developing a DBT based on feature models. Based on a domain analysis process, we provide a metamodel that defines the key concepts needed for developing a DBT. Subsequently, a reusable family feature model for PPS is provided that includes the common and variant properties of the PPS concepts of detection, deterrence, and response. The configuration processes are modeled to select and analyze the required features for implementing the threat scenarios. Finally, we discuss the integration of the DBT with the PPS design process.
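The abstract does not reproduce the feature model itself; purely as a hypothetical illustration, a family feature model distinguishes common (mandatory) features from variant (optional or alternative) ones for the detection, deterrence, and response concepts, which might be encoded along these lines:

    # Hypothetical, illustrative PPS feature-model fragment, not taken from
    # the paper: mandatory features appear in every configuration, while
    # "alternative" groups require exactly one choice and "optional" groups
    # allow any subset.
    pps_feature_model = {
        "PPS": {
            "mandatory": ["Detection", "Deterrence", "Response"],
            "Detection": {"alternative": ["MotionSensor", "CCTV", "Radar"]},
            "Deterrence": {"optional": ["Fencing", "Lighting", "Signage"]},
            "Response": {"alternative": ["OnSiteGuard", "RemoteDispatch"]},
        }
    }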