Biblio

Filters: Keyword is Learning party
2022-12-01
Kamhoua, Georges, Bandara, Eranga, Foytik, Peter, Aggarwal, Priyanka, Shetty, Sachin.  2021.  Resilient and Verifiable Federated Learning against Byzantine Colluding Attacks. 2021 Third IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :31–40.
Federated Learning (FL) is a multiparty learning approach that supports privacy-preserving machine learning. However, FL faces several potential security and privacy threats. First, existing FL schemes require a central coordinator for the learning process, which introduces a single point of failure and trust issues for the shared trained model. Second, during the learning process, intentionally unreliable model updates submitted by Byzantine colluding parties can degrade the quality and convergence of the shared ML model. Therefore, verifying local model updates (i.e., their integrity and correctness) and identifying trusted parties in FL becomes crucial. In this paper, we propose a resilient and verifiable FL algorithm based on a reputation scheme to cope with unreliable parties. We develop a task-publisher selection algorithm and a blockchain-based multiparty learning architecture in which local model updates are securely exchanged and verified without a central party. We also propose a novel auditing scheme and show that our approach is resilient to Byzantine colluding attacks by up to 50% of the parties in a malicious scenario.
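The abstract's exact protocol (blockchain-based exchange, task-publisher selection, auditing) is not reproduced here; the following is only a minimal Python sketch of the general idea of reputation-based, Byzantine-resilient aggregation of local model updates. All names (audit, aggregate, coordinate_wise_median), the reputation update rule, and the cutoff values are illustrative assumptions, not the authors' algorithm.

"""
Illustrative sketch (not the paper's algorithm): audit local updates against a
robust reference (coordinate-wise median), penalize outlier parties' reputation,
and aggregate with reputation weights while excluding low-reputation parties.
"""
import numpy as np


def coordinate_wise_median(updates):
    # Robust reference point: per-coordinate median of all local updates.
    return np.median(np.stack(updates), axis=0)


def audit(updates, reputations, tolerance=2.0, penalty=0.5, reward=0.1):
    # Lower the reputation of parties whose update is far from the robust
    # reference; slightly reward parties whose update stays close to it.
    reference = coordinate_wise_median(updates)
    distances = np.array([np.linalg.norm(u - reference) for u in updates])
    threshold = tolerance * np.median(distances) + 1e-12
    new_reps = reputations.copy()
    for i, d in enumerate(distances):
        if d > threshold:
            new_reps[i] *= penalty                      # suspected Byzantine update
        else:
            new_reps[i] = min(1.0, new_reps[i] + reward)
    return new_reps


def aggregate(updates, reputations, cutoff=0.3):
    # Reputation-weighted average; parties below the cutoff are excluded.
    weights = np.where(reputations >= cutoff, reputations, 0.0)
    total = weights.sum()
    if total == 0.0:
        raise ValueError("no party passed the reputation cutoff")
    weights = weights / total
    return sum(w * u for w, u in zip(weights, updates))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    honest = [rng.normal(1.0, 0.1, size=4) for _ in range(6)]
    byzantine = [rng.normal(-10.0, 0.1, size=4) for _ in range(3)]  # colluding outliers
    updates = honest + byzantine
    reputations = np.ones(len(updates))

    for _ in range(3):  # a few rounds with the same party behaviour
        reputations = audit(updates, reputations)
    print("reputations:", np.round(reputations, 2))
    print("aggregate:  ", np.round(aggregate(updates, reputations), 2))

Because the coordinate-wise median tolerates fewer than half of the updates being arbitrary, the sketch is consistent with the paper's claimed resilience bound of up to 50% Byzantine colluding parties, though the authors' actual scheme relies on blockchain-based verification rather than this simplified audit.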