Bibliography
The model called CSAI-4-CPS is proposed to characterize the use of Artificial Intelligence in Cybersecurity applied to the context of Cyber-Physical Systems (CPS). The model aims to establish a methodology capable of self-adapting through shared machine learning models, without sacrificing data privacy. It will be implemented in a generic framework so that accuracy can be assessed across different datasets, taking advantage of federated learning and machine learning approaches. The proposed solution can facilitate the construction of new AI cybersecurity tools and systems for CPS, enabling better assessment and more efficiently increasing the security and robustness of these systems.
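To make the shared-model idea concrete, the sketch below shows a minimal federated-averaging round: clients train locally and share only parameter vectors, never raw data. This is a plain FedAvg illustration under simplified assumptions (a least-squares model, two synthetic clients), not the CSAI-4-CPS framework itself; the names local_update and fed_avg are hypothetical.

```python
# Minimal federated-averaging sketch (illustrative only; not the CSAI-4-CPS
# implementation). Each client trains locally and shares only model weights,
# so raw training data never leaves the device.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: plain least-squares gradient steps."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server-side aggregation: average weights, weighted by dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# One example round with two hypothetical clients
rng = np.random.default_rng(0)
global_w = np.zeros(3)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(2)]
updates = [local_update(global_w, X, y) for X, y in clients]
global_w = fed_avg(updates, [len(y) for _, y in clients])
```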
Federated learning (FL) allows massive amounts of data to be trained on privately thanks to its decentralized structure. Stochastic gradient descent (SGD) is commonly used for FL due to its good empirical performance, but sensitive user information can still be inferred from the weight updates shared during FL iterations. We consider Gaussian mechanisms to preserve local differential privacy (LDP) of user data in the FL model with SGD. The trade-offs between user privacy, global utility, and transmission rate are proved by defining appropriate metrics for FL with LDP. Compared to existing results, the query sensitivity used in LDP is defined as a variable, and a tighter privacy accounting method is applied. The proposed utility bound allows heterogeneous parameters across all users. Our bounds characterize how much utility decreases and how much the transmission rate increases when a stronger privacy regime is targeted. Furthermore, given a target privacy level, our results guarantee a significantly larger utility and a smaller transmission rate than existing privacy accounting methods.
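As a rough illustration of the Gaussian mechanism described above, the sketch below clips a client's weight update to bound the query sensitivity and then adds calibrated Gaussian noise before the update leaves the device. The sigma calibration shown is the standard (epsilon, delta) Gaussian-mechanism formula, not the tighter privacy accountant the paper proposes; the function name privatize_update is illustrative.

```python
# Sketch of the Gaussian mechanism applied to a client's weight update before
# it is shared with the server. The noise calibration below is the classic
# (epsilon, delta)-DP heuristic, not the paper's tighter accounting method.
import numpy as np

def privatize_update(update, clip_norm, epsilon, delta, rng=None):
    """Clip the update to bound query sensitivity, then add Gaussian noise."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    # Classic calibration: sigma >= sqrt(2 ln(1.25/delta)) * sensitivity / epsilon
    sigma = np.sqrt(2 * np.log(1.25 / delta)) * clip_norm / epsilon
    return clipped + rng.normal(0.0, sigma, size=update.shape)

# Usage: each client privatizes (w_local - w_global) before transmission
noisy = privatize_update(np.array([0.3, -0.8, 1.2]), clip_norm=1.0,
                         epsilon=2.0, delta=1e-5)
```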
Federated learning is a distributed learning technique where machine learning models are trained on the client devices on which the local training data resides. The training is coordinated via a central server which is typically controlled by the intended owner of the resulting model. By avoiding the need to transport the training data to the central server, federated learning improves privacy and efficiency, but it raises the risk of model theft by clients because the resulting model is available on every client device. Even though the application software used for local training may attempt to prevent direct access to the model, a malicious client may bypass any such restrictions by reverse engineering the application software. Watermarking is a well-known deterrence method against model theft, providing the means for model owners to demonstrate ownership of their models. Several recent deep neural network (DNN) watermarking techniques use backdooring: training the models with additional mislabeled data. Backdooring requires full access to the training data and control of the training process. This is feasible when a single party trains the model in a centralized manner, but not in a federated learning setting where the training process and training data are distributed among several client devices. In this paper, we present WAFFLE, the first approach to watermark DNN models trained using federated learning. It introduces a retraining step at the server after each aggregation of local models into the global model. We show that WAFFLE efficiently embeds a resilient watermark into models, incurring only negligible degradation in test accuracy (-0.17%), and does not require access to training data. We also introduce a novel technique to generate the backdoor used as a watermark. It outperforms prior techniques, imposing no communication overhead and only low computational overhead (+3.2%). (The research report version of this paper is also available at https://arxiv.org/abs/2008.07298, and the code for reproducing our work can be found at https://github.com/ssg-research/WAFFLE.)
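The server-side retraining step can be pictured roughly as follows: after aggregating client updates into the global model, the server fine-tunes that model on a small backdoor (watermark) set that only it holds. This is a conceptual PyTorch sketch in the spirit of WAFFLE, not the authors' implementation (see the linked repository for that); the function name embed_watermark, the hyperparameters, and the stopping threshold are assumptions.

```python
# Conceptual sketch of server-side watermark embedding after each aggregation
# round. The watermark set is a small backdoor dataset held only by the
# server; no client training data is needed.
import torch
import torch.nn.functional as F

def embed_watermark(global_model, wm_inputs, wm_labels, lr=0.01, max_epochs=25):
    """Retrain the aggregated global model on the watermark (backdoor) set
    until it is memorized, without touching any client training data."""
    opt = torch.optim.SGD(global_model.parameters(), lr=lr)
    for _ in range(max_epochs):
        opt.zero_grad()
        logits = global_model(wm_inputs)
        loss = F.cross_entropy(logits, wm_labels)
        loss.backward()
        opt.step()
        if (logits.argmax(dim=1) == wm_labels).float().mean() >= 0.98:
            break  # watermark sufficiently embedded for this round
    return global_model

# Example with a toy model and a 10-sample watermark set
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
wm_x, wm_y = torch.randn(10, 1, 28, 28), torch.randint(0, 10, (10,))
embed_watermark(model, wm_x, wm_y)
```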
This Innovative Practice Full Paper describes our experience with teaching cybersecurity topics using guided inquiry collaborative learning. The goal is not only to develop the students' in-depth technical knowledge, but also “soft skills” such as communication, attitude, teamwork, networking, problem-solving, and critical thinking. This paper reports our experience with developing and using Guided Inquiry Collaborative Learning materials on the topics of firewalls and IPsec. Pre- and post-surveys were conducted to assess the effectiveness of the developed materials and teaching methods in terms of learning outcomes, attitudes, learning experience, and motivation. Analysis of the survey data shows that students had increased learning outcomes, participation in class, and interest with Guided Inquiry Collaborative Learning.
Cloud Storage Services (CSS) provide unbounded, robust file storage and facilitate pay-per-use and collaborative work for end users. However, due to security issues such as lack of confidentiality and malicious insiders, they have not gained widespread acceptance for storing sensitive information. Researchers have proposed proxy re-encryption schemes for secure data sharing through the cloud. With the advancement of computing technologies and the advent of quantum computing algorithms, the security of existing schemes can be compromised within seconds. Hence, there is a need to design security schemes that are resistant to quantum computing. In this paper, a secure file sharing scheme through cloud storage using the proxy re-encryption technique is proposed. The proposed scheme is proven to be chosen-ciphertext secure (CCA) under the hardness of the ring-LWE search problem in the random oracle model. The proposed scheme outperforms existing CCA-secure schemes in terms of re-encryption time and decryption time for encrypted files, resulting in an efficient file sharing scheme through cloud storage.
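To make the proxy re-encryption workflow concrete, the toy sketch below uses a classical ElGamal-based construction (in the style of Blaze-Bleumer-Strauss): the proxy transforms Alice's ciphertext, using a re-encryption key, into one Bob can decrypt, without ever seeing the plaintext. This is only a workflow illustration with insecure toy parameters; it is not the quantum-resistant ring-LWE scheme proposed in the paper, and all function names are hypothetical.

```python
# Toy ElGamal-based proxy re-encryption (BBS-style) to illustrate the workflow.
# NOT the paper's post-quantum ring-LWE scheme; parameters are insecure demos.
import secrets

p, q, g = 1019, 509, 4   # safe prime p = 2q + 1; g generates the order-q subgroup

def keygen():
    sk = secrets.randbelow(q - 1) + 1            # secret exponent in [1, q-1]
    return sk, pow(g, sk, p)                     # (sk, pk)

def encrypt(pk, m):
    k = secrets.randbelow(q - 1) + 1
    return (m * pow(g, k, p)) % p, pow(pk, k, p)  # (c1, c2) = (m*g^k, g^(a*k))

def rekey(sk_a, sk_b):
    return (sk_b * pow(sk_a, -1, q)) % q          # rk = b * a^(-1) mod q

def reencrypt(rk, ct):
    c1, c2 = ct
    return c1, pow(c2, rk, p)                     # turns g^(a*k) into g^(b*k)

def decrypt(sk, ct):
    c1, c2 = ct
    shared = pow(c2, pow(sk, -1, q), p)           # recover g^k
    return (c1 * pow(shared, -1, p)) % p          # m = c1 / g^k

# Alice encrypts a group element; the proxy re-encrypts it for Bob.
sk_a, pk_a = keygen()
sk_b, pk_b = keygen()
m = pow(g, 42, p)                                 # message encoded as a group element
ct_bob = reencrypt(rekey(sk_a, sk_b), encrypt(pk_a, m))
assert decrypt(sk_b, ct_bob) == m
```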