Biblio

Filters: Keyword is differential privacy mechanism
2020-08-13
Razaque, Abdul, Frej, Mohamed Ben Haj, Yiming, Huang, Shilin, Yan. 2019. Analytical Evaluation of k-Anonymity Algorithm and Epsilon-Differential Privacy Mechanism in Cloud Computing Environment. 2019 IEEE Cloud Summit. :103–109.

Expected and unexpected risks in cloud computing, including data security, data segregation, and the lack of control and knowledge, have created dilemmas in several fields. Among these, the privacy problem is the most pressing and has largely constrained the adoption and development of cloud computing. The privacy protection algorithms proposed to date generally fall into two categories: anonymity algorithms and differential privacy mechanisms. While much research has focused on the efficiency of these algorithms, little of it has examined the differing orientations and shortcomings of the two approaches. Motivated by this gap, we conduct a comprehensive survey of two popular privacy protection algorithms, the k-anonymity algorithm and the epsilon-differential privacy mechanism, and evaluate them based on their principles, implementations, and orientations. We also offer expectations and comparisons grounded in the current cloud computing privacy environment and its future requirements.
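The abstract contrasts k-anonymity with the epsilon-differential privacy mechanism but does not illustrate the latter; below is a minimal sketch, assuming the standard Laplace mechanism applied to a count query (the function name, toy dataset, and parameter values are illustrative and not taken from the paper).

import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon):
    # Epsilon-differentially private estimate: add Laplace(0, sensitivity/epsilon) noise.
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Hypothetical usage: privatize a count query over a toy dataset.
# A count query has sensitivity 1 (adding or removing one record changes it by at most 1).
ages = [23, 35, 45, 52, 61, 29]
true_count = sum(1 for a in ages if a >= 40)
private_count = laplace_mechanism(true_count, sensitivity=1, epsilon=0.5)
print(f"true count: {true_count}, private count: {private_count:.2f}")

A smaller epsilon means a larger noise scale and stronger privacy at the cost of accuracy, which is the trade-off the paper's evaluation is concerned with.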

2020-08-07
Zhu, Tianqing, Yu, Philip S. 2019. Applying Differential Privacy Mechanism in Artificial Intelligence. 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS). :1601–1609.
Artificial Intelligence (AI) has attracted a great deal of attention in recent years. However, new problems have emerged, including privacy violations, security issues, and concerns about effectiveness. Differential privacy has several attractive properties that make it valuable for AI, such as privacy preservation, security, randomization, composition, and stability. Building on those properties, this paper presents differential privacy mechanisms for multi-agent systems, reinforcement learning, and knowledge transfer, showing that current AI can benefit from differential privacy. It also discusses prior uses of differential privacy mechanisms in private machine learning, distributed machine learning, and model fairness, pointing to several further avenues for applying differential privacy in AI. The purpose of this paper is to lay out an initial idea of how to integrate AI with differential privacy mechanisms and to explore further possibilities for improving AI's performance.
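The survey itself contains no code; as a rough, hedged sketch of the private machine learning it discusses, the snippet below applies the common DP-SGD recipe (per-example gradient clipping plus Gaussian noise) to a single linear-regression update in NumPy. The function name, hyperparameters, and toy data are illustrative assumptions, not the authors' method.

import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.0):
    # One differentially private gradient step for linear regression:
    # clip each per-example gradient to clip_norm, sum, add Gaussian noise
    # with std noise_multiplier * clip_norm, then average and update.
    n = len(X)
    clipped_sum = np.zeros_like(w)
    for xi, yi in zip(X, y):
        grad = 2 * (xi @ w - yi) * xi                       # per-example squared-error gradient
        norm = np.linalg.norm(grad)
        clipped_sum += grad / max(1.0, norm / clip_norm)    # clip to clip_norm
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    return w - lr * (clipped_sum + noise) / n

# Toy usage: fit y ≈ 2*x with a few private steps.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=100)
w = np.zeros(1)
for _ in range(50):
    w = dp_sgd_step(w, X, y)
print("learned weight:", w)

The noise multiplier controls the privacy/utility trade-off; tracking the cumulative privacy loss across steps (e.g., with a privacy accountant) is omitted here for brevity.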