Applying Differential Privacy Mechanism in Artificial Intelligence

Title: Applying Differential Privacy Mechanism in Artificial Intelligence
Publication Type: Conference Paper
Year of Publication: 2019
Authors: Zhu, Tianqing; Yu, Philip S.
Conference Name: 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS)
Date Published: July 2019
Keywords: AI, artificial intelligence, data privacy, Differential privacy, differential privacy mechanism, distributed machine learning, federated learning, Human Behavior, human factors, learning (artificial intelligence), machine learning, multi-agent system, multi-agent systems, multiagent systems, privacy, pubcrawl, reinforcement learning, resilience, Resiliency, Scalability, security
Abstract: Artificial Intelligence (AI) has attracted a large amount of attention in recent years. However, several new problems, such as privacy violations, security issues, and concerns about effectiveness, have been emerging. Differential privacy has several attractive properties that make it quite valuable for AI, such as privacy preservation, security, randomization, composition, and stability. Therefore, this paper presents differential privacy mechanisms for multi-agent systems, reinforcement learning, and knowledge transfer based on those properties, showing that current AI can benefit from differential privacy mechanisms. In addition, the previous usage of differential privacy mechanisms in private machine learning, distributed machine learning, and fairness in models is discussed, suggesting several possible avenues for using differential privacy mechanisms in AI. The purpose of this paper is to deliver the initial idea of how to integrate AI with differential privacy mechanisms and to explore more possibilities to improve AI's performance.
DOI: 10.1109/ICDCS.2019.00159
Citation Key: zhu_applying_2019
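
The abstract above cites differential privacy's randomization and composition properties. As a generic illustration only (not taken from the paper), the sketch below shows the standard Laplace mechanism for a numeric query in Python; the function name and the example query values are purely illustrative assumptions.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a noisy answer to a numeric query under epsilon-differential privacy.

    Noise is drawn from a Laplace distribution with scale sensitivity/epsilon,
    the standard calibration for the Laplace mechanism.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Illustrative counting query (sensitivity 1) answered twice.
# By sequential composition, the two releases together consume
# a total privacy budget of epsilon_1 + epsilon_2.
count = 42
noisy_1 = laplace_mechanism(count, sensitivity=1.0, epsilon=0.5)
noisy_2 = laplace_mechanism(count, sensitivity=1.0, epsilon=0.5)
print(noisy_1, noisy_2)  # total budget spent: 1.0
```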