Resisting Distributed Backdoor Attacks in Federated Learning: A Dynamic Norm Clipping Approach

Title: Resisting Distributed Backdoor Attacks in Federated Learning: A Dynamic Norm Clipping Approach
Publication Type: Conference Paper
Year of Publication: 2021
Authors: Guo, Yifan; Wang, Qianlong; Ji, Tianxi; Wang, Xufei; Li, Pan
Conference Name: 2021 IEEE International Conference on Big Data (Big Data)
Date Published: December
Keywords: Big Data, Collaborative Work, composability, compositionality, Data Sanitization, distributed backdoor attacks, Distributed databases, dynamic norm clipping, federated learning, Human Behavior, Limiting, privacy, pubcrawl, resilience, Resiliency, Resists, Training, Training data
Abstract: With advances in artificial intelligence and high-dimensional data analysis, federated learning (FL) has emerged to allow distributed data providers to learn collaboratively without direct access to local sensitive data. However, limiting access to individual providers' data inevitably incurs security issues. For instance, backdoor attacks, one of the most popular data poisoning attacks in FL, severely threaten the integrity and utility of the FL system. In particular, backdoor attacks launched by multiple collusive attackers, i.e., distributed backdoor attacks, can achieve high attack success rates and are hard to detect. Existing defensive approaches, such as model inspection or model sanitization, often require access to a portion of the local training data, which renders them inapplicable to FL scenarios. Recently, the norm clipping approach was developed to effectively defend against distributed backdoor attacks in FL without relying on local training data. However, we discover that adversaries can still bypass this defense scheme through robust training because its norm clipping threshold remains unchanged. In this paper, we propose a novel defense scheme to resist distributed backdoor attacks in FL. In particular, we first identify that the main reason for the failure of the norm clipping scheme is its fixed threshold in the training process, which cannot capture the dynamic nature of benign local updates during the global model's convergence. Motivated by this, we devise a novel defense mechanism that dynamically adjusts the norm clipping threshold of local updates. Moreover, we provide a convergence analysis of our defense scheme. Evaluating it on four non-IID public datasets, we observe that our defense scheme can effectively resist distributed backdoor attacks and ensure the global model's convergence. Notably, our scheme reduces attack success rates by 84.23% on average compared with existing defense schemes.
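The abstract describes clipping each client's local update to a norm threshold that changes per round rather than staying fixed. The sketch below illustrates the general idea only; the median-of-norms threshold rule, the function names, and the NumPy setup are illustrative assumptions, not the paper's actual adjustment scheme (which the abstract does not specify).

```python
import numpy as np

def clip_update(update, threshold):
    """Scale a local update so its L2 norm does not exceed the threshold."""
    norm = np.linalg.norm(update)
    if norm > threshold:
        update = update * (threshold / norm)
    return update

def aggregate_with_dynamic_clipping(updates):
    """Average client updates after clipping each to a per-round threshold.

    The threshold here is the median norm of the current round's updates --
    an illustrative stand-in for the paper's dynamic adjustment rule.
    A majority-honest setting keeps the median close to benign update norms,
    so an oversized (potentially backdoored) update is scaled down.
    """
    norms = [np.linalg.norm(u) for u in updates]
    threshold = float(np.median(norms))
    clipped = [clip_update(u, threshold) for u in updates]
    return np.mean(clipped, axis=0), threshold
```

Because every clipped update has norm at most the threshold, the averaged global update is bounded by it as well, which limits how far a small group of collusive attackers can move the global model in one round.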
DOI: 10.1109/BigData52589.2021.9671910
Citation Key: guo_resisting_2021