Sybil Attacks and Defense on Differential Privacy based Federated Learning

Title: Sybil Attacks and Defense on Differential Privacy based Federated Learning
Publication Type: Conference Paper
Year of Publication: 2021
Authors: Jiang, Yupeng; Li, Yong; Zhou, Yipeng; Zheng, Xi
Conference Name: 2021 IEEE 20th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom)
Date Published: October 2021
Keywords: Collaborative Work, composability, Deep Learning, Differential privacy, federated learning, Metrics, Perturbation methods, privacy, pubcrawl, resilience, Resiliency, security, Sybil attack, sybil attacks, Training
Abstract: In federated learning, machine learning and deep learning models are trained globally on distributed devices. The state-of-the-art privacy-preserving technique in the context of federated learning is user-level differential privacy. However, such a mechanism is vulnerable to certain model poisoning attacks such as Sybil attacks: a malicious adversary can create multiple fake clients, or collude compromised devices, to directly manipulate model updates. Recent defenses against model poisoning attacks have difficulty detecting Sybil attacks when differential privacy is utilized, as it masks clients' model updates with perturbation. In this work, we implement the first Sybil attacks on differential-privacy-based federated learning architectures and show their impact on model convergence. We randomly compromise some clients by manipulating the noise levels, reflected by the local privacy budget ε of the Laplace mechanism, applied to the local model updates of these Sybil clients. As a result, the global model's convergence rate decreases, or the model even diverges. We apply our attacks to two recent aggregation defense mechanisms, Krum and Trimmed Mean. Our evaluation results on the MNIST and CIFAR-10 datasets show that our attacks effectively slow down the convergence of the global models. We then propose a defense that monitors the average loss of all participants in each round to detect convergence anomalies, mitigating the Sybil attacks based on the training loss reported by randomly selected sets of clients acting as judging panels. Our empirical study demonstrates that our defense effectively mitigates the impact of our Sybil attacks.
DOI: 10.1109/TrustCom53373.2021.00062
Citation Key: jiang_sybil_2021
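
As context for the attack described in the abstract, below is a minimal illustrative sketch, not the authors' implementation: it shows how a Sybil client could weaken its local privacy budget ε under the standard Laplace mechanism so that its perturbed update injects far more noise than an honest client's. The function name laplace_perturb, the sensitivity value, and the chosen epsilon values are all assumptions for illustration.

    import numpy as np

    # Standard Laplace mechanism: noise scale = sensitivity / epsilon,
    # so a smaller epsilon yields proportionally larger perturbation.
    # (laplace_perturb and sensitivity=1.0 are illustrative assumptions.)
    def laplace_perturb(update, epsilon, sensitivity=1.0):
        scale = sensitivity / epsilon
        return update + np.random.laplace(loc=0.0, scale=scale, size=update.shape)

    local_update = np.zeros(10)  # stand-in for one client's local model update

    honest_update = laplace_perturb(local_update, epsilon=1.0)   # honest client: moderate noise
    sybil_update = laplace_perturb(local_update, epsilon=0.01)   # Sybil client: tiny epsilon, heavy noise

    # Aggregating many such over-noised Sybil updates into the global model
    # slows convergence or causes divergence, per the abstract above.
    print(np.abs(honest_update).mean(), np.abs(sybil_update).mean())

Under this sketch, the server cannot distinguish a Sybil update's noise from legitimate differential-privacy perturbation by inspecting the update itself, which is consistent with the paper's choice to detect the attack through per-round loss monitoring by judging panels rather than update inspection.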