Model Fragmentation, Shuffle and Aggregation to Mitigate Model Inversion in Federated Learning

Title: Model Fragmentation, Shuffle and Aggregation to Mitigate Model Inversion in Federated Learning
Publication Type: Conference Paper
Year of Publication: 2021
Authors: Masuda, Hiroki; Kita, Kentaro; Koizumi, Yuki; Takemasa, Junji; Hasegawa, Toru
Conference Name: 2021 IEEE International Symposium on Local and Metropolitan Area Networks (LANMAN)
Date Published: July 2021
Keywords: Adversary Models, Aggregates, Collaborative Work, Differential privacy, federated learning, Human Behavior, Learning systems, Metrics, Metropolitan area networks, Model inversion, privacy, pubcrawl, Resiliency, Resistance, Scalability, Training data
Abstract: Federated learning is a privacy-preserving learning system in which participants locally update a shared model with their own training data. Although training data are never sent to a server, there remains a risk that a state-of-the-art model inversion attack, possibly conducted by the server, infers training data from the models updated by the participants, referred to as individual models. One defense against such attacks is differential privacy, in which each participant adds noise to its individual model before sending it to the server. Differential privacy, however, sacrifices the quality of the shared model in exchange for guaranteeing that participants' training data are not leaked. This paper proposes a federated learning system that resists model inversion attacks without sacrificing the quality of the shared model. The core idea is that each participant divides its individual model into model fragments, then shuffles and aggregates them to prevent adversaries from inferring training data. A further benefit of the proposed system is that the resulting shared model is identical to the one generated by naive federated learning.
DOI: 10.1109/LANMAN52105.2021.9478813
Citation Key: masuda_model_2021
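The fragment-shuffle-aggregate idea summarized in the abstract can be illustrated numerically. The sketch below is not the paper's protocol; it assumes, for illustration only, that each participant splits its weight vector into additive fragments that sum back to the model, that the fragment pool is shuffled so the server cannot link a fragment to its owner, and that the server simply averages the pool. Under these assumptions the aggregate matches naive federated averaging exactly, which is the property the abstract highlights.

```python
import numpy as np

rng = np.random.default_rng(0)
n_participants, dim, n_fragments = 3, 8, 4

# Hypothetical individual models: one weight vector per participant.
models = [rng.normal(size=dim) for _ in range(n_participants)]

def fragment(model, k, rng):
    """Split a model into k additive fragments that sum back to the model."""
    shares = [rng.normal(size=model.shape) for _ in range(k - 1)]
    shares.append(model - sum(shares))
    return shares

# Each participant fragments its model; all fragments are pooled and
# shuffled so that no fragment can be attributed to a participant.
pool = [f for m in models for f in fragment(m, n_fragments, rng)]
order = rng.permutation(len(pool))
pool = [pool[i] for i in order]

# The server sums the anonymized fragments and divides by the number
# of participants, never seeing any complete individual model.
shared = sum(pool) / n_participants

# The aggregate is identical to naive federated averaging.
naive = sum(models) / n_participants
assert np.allclose(shared, naive)
```

Because the fragments are additive shares, shuffling changes nothing about their sum; the privacy gain comes from the server observing only unlinkable pieces rather than whole individual models.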