Enhancing Cyber Security in IoT Systems using FL-based IDS with Differential Privacy

Title: Enhancing Cyber Security in IoT Systems using FL-based IDS with Differential Privacy
Publication Type: Conference Paper
Year of Publication: 2022
Authors: Anastasakis, Zacharias; Psychogyios, Konstantinos; Velivassaki, Terpsi; Bourou, Stavroula; Voulkidis, Artemis; Skias, Dimitrios; Gonos, Antonis; Zahariadis, Theodore
Conference Name: 2022 Global Information Infrastructure and Networking Symposium (GIIS)
Keywords: additive noise, AI, cyber security, Data models, Differential privacy, federated learning, human factors, Internet of Things, performance evaluation, privacy, privacy preservation, pubcrawl, resilience, Resiliency, Scalability, Training
Abstract: Nowadays, IoT networks and devices are part of everyday life, capturing and carrying vast amounts of data. However, the increasing penetration of connected systems and devices implies rising cybersecurity threats, with IoT systems suffering from network attacks. Artificial Intelligence (AI) and Machine Learning exploit large volumes of IoT network logs to enhance cybersecurity in IoT. However, these data often need to remain private. Federated Learning (FL) provides a potential solution by enabling collaborative training of an attack detection model among a set of federated nodes while preserving privacy, as the data remain local and are never disclosed to or processed on central servers. While FL is resilient and resolves, up to a point, data governance and ownership issues, it does not guarantee security and privacy by design. Adversaries could interfere with the communication process, expose network vulnerabilities, and manipulate the training process, thus affecting the performance of the trained model. In this paper, we present a federated learning model that can successfully detect network attacks in IoT systems. Moreover, we evaluate its performance under various settings of differential privacy, as a privacy-preserving technique, and various configurations of the participating nodes. We show that the proposed model protects privacy without substantially compromising performance: it incurs a limited performance impact of only 7% lower testing accuracy compared to the baseline, while simultaneously guaranteeing security and applicability.
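To make the setup concrete, the following is a minimal illustrative sketch (not the authors' implementation) of one federated-learning round in which each node's model update is clipped and perturbed with additive Gaussian noise before aggregation, the additive-noise form of differential privacy referenced in the keywords. All names and parameter values here (local_update, clip_norm, noise_multiplier, the number of nodes) are hypothetical, and only NumPy is assumed.

    # Illustrative sketch: FedAvg-style aggregation with Gaussian-noise
    # differential privacy applied to clipped client updates.
    # This is NOT the paper's implementation; it only shows the general pattern.
    import numpy as np

    rng = np.random.default_rng(0)

    def local_update(global_weights, local_data):
        # Placeholder for one round of local training on a node's private
        # IoT network logs; a small random perturbation stands in for the
        # result of gradient descent.
        return global_weights + 0.01 * rng.standard_normal(global_weights.shape)

    def dp_federated_round(global_weights, client_datasets,
                           clip_norm=1.0, noise_multiplier=1.1):
        """One FL round: clip each client's update, add Gaussian noise, average."""
        noisy_updates = []
        for data in client_datasets:
            update = local_update(global_weights, data) - global_weights
            # Clip the update to bound each client's contribution (sensitivity).
            norm = np.linalg.norm(update)
            update = update * min(1.0, clip_norm / (norm + 1e-12))
            # Add Gaussian noise calibrated to the clipping norm (additive noise).
            update += rng.normal(0.0, noise_multiplier * clip_norm, update.shape)
            noisy_updates.append(update)
        # The server aggregates only the privatized updates; raw local data
        # never leave the nodes.
        return global_weights + np.mean(noisy_updates, axis=0)

    # Example: 5 federated IoT nodes and a toy 10-parameter "model".
    weights = np.zeros(10)
    datasets = [None] * 5        # stand-ins for each node's local network logs
    for _ in range(3):           # a few communication rounds
        weights = dp_federated_round(weights, datasets)

In such a sketch the noise scale is tied to the clipping norm, so the accuracy-versus-privacy trade-off the paper evaluates corresponds to varying parameters like the noise multiplier and the number of participating nodes.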
DOI: 10.1109/GIIS56506.2022.9936912
Citation Key: anastasakis_enhancing_2022