Title | Secure Federated Averaging Algorithm with Differential Privacy |
Publication Type | Conference Paper |
Year of Publication | 2020 |
Authors | Li, Y., Chang, T.-H., Chi, C.-Y. |
Conference Name | 2020 IEEE 30th International Workshop on Machine Learning for Signal Processing (MLSP) |
Keywords | algorithm communication efficiency, Analytical models, client sensitive information, client-server systems, composability, convergence, convergence analysis, convergence rate, Data models, data privacy, differential attacks, differential privacy, distributed machine learning, federated learning, federated averaging algorithm, Gaussian noise, gradient methods, Human Behavior, learning (artificial intelligence), local model parameters, message exchange, message obfuscation, Model averaging, Prediction algorithms, privacy, pubcrawl, Resiliency, Scalability, secure FedAvg algorithm, secure federated averaging algorithm, security of data, Servers, stochastic gradient descent, Stochastic processes |
Abstract | Federated learning (FL), as a recent advance in distributed machine learning, is capable of learning a model over the network without directly accessing the clients' raw data. Nevertheless, the clients' sensitive information can still be exposed to adversaries via differential attacks on messages exchanged between the parameter server and clients. In this paper, we consider the widely used federated averaging (FedAvg) algorithm and propose to enhance data privacy via the differential privacy (DP) technique, which obfuscates the exchanged messages by properly adding Gaussian noise. We analytically show that the proposed secure FedAvg algorithm maintains an O(1/T) convergence rate, where T is the total number of stochastic gradient descent (SGD) updates for the local model parameters. Moreover, we demonstrate how various algorithm parameters impact the algorithm's communication efficiency. Experimental results are presented to corroborate the analytical findings on the performance of the proposed algorithm in terms of test accuracy. |
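The abstract describes obfuscating the messages exchanged in FedAvg by adding Gaussian noise (the Gaussian mechanism of differential privacy). A minimal sketch of that idea, assuming per-update norm clipping before noising — the function names, `clip_norm`, and `noise_std` are illustrative choices, not the parameterization used in the paper:

```python
import numpy as np

def dp_obfuscate(update, clip_norm, noise_std, rng):
    """Clip a local model update to bounded L2 norm, then add Gaussian noise.

    Clipping bounds each client's sensitivity; the Gaussian noise then
    provides (epsilon, delta)-DP for an appropriately chosen noise_std.
    """
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, noise_std, size=update.shape)

def secure_fedavg_round(client_updates, clip_norm=1.0, noise_std=0.1, seed=0):
    """One server round: average the obfuscated client updates."""
    rng = np.random.default_rng(seed)
    noisy = [dp_obfuscate(u, clip_norm, noise_std, rng) for u in client_updates]
    return np.mean(noisy, axis=0)
```

With zero noise the round reduces to plain FedAvg over clipped updates; increasing `noise_std` trades accuracy for a stronger privacy guarantee, which is the tension the paper's convergence analysis quantifies.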
DOI | 10.1109/MLSP49062.2020.9231531 |
Citation Key | li_secure_2020 |