FairFed: Cross-Device Fair Federated Learning

Title: FairFed: Cross-Device Fair Federated Learning
Publication Type: Conference Paper
Year of Publication: 2020
Authors: Muhammad Habib ur Rehman, Ahmed Mukhtar Dirir, Khaled Salah, Davor Svetinovic
Conference Name: 2020 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)
Date Published: October 2020
Keywords: AI Poisoning, data quality, Deep Learning, Differential privacy, fairness, federated learning, Human Behavior, machine learning, Model Development, Neural networks, Outlier detection, performance evaluation, Protocols, pubcrawl, Resiliency, Scalability, Sociology, Training
Abstract: Federated learning (FL) is a rapidly developing machine learning technique used to perform collaborative model training over decentralized datasets. FL enables privacy-preserving model development whereby the datasets are scattered over a large set of data producers (i.e., devices and/or systems). These data producers train the learning models, encapsulate the model updates with differential privacy techniques, and share them with centralized systems for global aggregation. However, these centralized models are prone to adversarial attacks (such as data-poisoning and model-poisoning attacks) because of the large number of data producers. Hence, FL methods need to ensure fairness and high-quality model availability across all the participants in the underlying AI systems. In this paper, we propose a novel FL framework, called FairFed, to meet fairness and high-quality data requirements. FairFed provides a fairness mechanism to detect adversaries across the devices and datasets in the FL network and reject their model updates. We use a Python-simulated FL framework to enable large-scale training over the MNIST dataset. We simulate a cross-device model training setting to detect adversaries in the training network. We use TensorFlow Federated and Python to implement the fairness protocol, the deep neural network, and the outlier detection algorithm. We thoroughly test the proposed FairFed framework with random and uniform data distributions across the training network and compare our initial results with the baseline fairness scheme. Our proposed work shows promising results in terms of model accuracy and loss.
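The abstract describes rejecting adversarial model updates via outlier detection before global aggregation. The following is a minimal illustrative sketch of that general idea, not the authors' implementation: it flags client updates whose distance from the coordinate-wise median update is large (relative to the median absolute deviation of all such distances) and averages only the remaining updates. The function name and `threshold` parameter are assumptions for illustration.

```python
import statistics


def fedavg_with_outlier_rejection(updates, threshold=2.0):
    """Average client updates, dropping outliers far from the median update.

    Illustrative sketch only (not the FairFed protocol itself).
    `updates` is a list of equal-length lists of floats (one per client);
    `threshold` scales the median absolute deviation used to flag adversaries.
    """
    n_params = len(updates[0])
    # Coordinate-wise median update serves as a robust reference point.
    medians = [statistics.median(u[i] for u in updates) for i in range(n_params)]
    # Score each client by mean absolute distance from the median update.
    scores = [sum(abs(u[i] - medians[i]) for i in range(n_params)) / n_params
              for u in updates]
    mad = statistics.median(scores) or 1e-12  # guard against all-zero scores
    accepted = [u for u, s in zip(updates, scores) if s <= threshold * mad]
    if not accepted:  # fall back to plain averaging if everything was flagged
        accepted = updates
    # Plain federated averaging over the accepted updates.
    return [sum(u[i] for u in accepted) / len(accepted) for i in range(n_params)]
```

With three similar honest updates and one far-off poisoned update, the poisoned one is excluded and the aggregate stays close to the honest average.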
DOI: 10.1109/AIPR50011.2020.9425266
Citation Key: habib_ur_rehman_fairfed_2020