Biblio

Filters: Keyword is l-Diversity
2022-03-22
Muthulakshmi, S., Chitra, R.  2021.  Enhanced Data Privacy Algorithm to Protect the Data in Smart Grid. 2021 Smart Technologies, Communication and Robotics (STCR). :1–4.
Smart grids are used to improve the accuracy of grid network queries. While they improve accuracy, they also raise data privacy issues, and solving these privacy issues in the smart grid is a major challenge. Since the data are critical, secure algorithms are needed to protect them. This paper describes the k-anonymity algorithm and analyzes an enhanced l-diversity algorithm for data privacy and security. Experiments show that the algorithm can protect the data in the smart grid.
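For context, distinct l-diversity requires that every group of records sharing the same quasi-identifier values contain at least l distinct values of the sensitive attribute. Below is a minimal sketch of such a check; the column names and the sample smart-meter records are invented for illustration and are not taken from the paper's enhanced algorithm.

```python
# Sketch of a distinct l-diversity check on an anonymized table.
# Hypothetical smart-grid fields: region and tariff band are treated as
# quasi-identifiers; the consumption profile is the sensitive attribute.
from collections import defaultdict

def is_l_diverse(records, quasi_ids, sensitive, l):
    """True if every equivalence class (records sharing the same
    quasi-identifier values) has >= l distinct sensitive values."""
    classes = defaultdict(set)
    for row in records:
        key = tuple(row[q] for q in quasi_ids)
        classes[key].add(row[sensitive])
    return all(len(values) >= l for values in classes.values())

table = [
    {"region": "north", "tariff": "A", "profile": "evening-peak"},
    {"region": "north", "tariff": "A", "profile": "flat"},
    {"region": "south", "tariff": "B", "profile": "evening-peak"},
    {"region": "south", "tariff": "B", "profile": "morning-peak"},
]
print(is_l_diverse(table, ["region", "tariff"], "profile", l=2))  # True
```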
2020-12-28
Lee, H., Cho, S., Seong, J., Lee, S., Lee, W.  2020.  De-identification and Privacy Issues on Bigdata Transformation. 2020 IEEE International Conference on Big Data and Smart Computing (BigComp). :514–519.

As the amount of data in various industries and government sectors grows exponentially, the `7V' concept of big data aims to create new value by indiscriminately collecting and analyzing information from various fields. At the same time, as the ICT industry ecosystem matures, big data utilization is threatened by privacy attacks such as infringement due to the large amount of data. To manage and sustain a controllable privacy level, recommended de-identification techniques are needed. This paper examines those de-identification processes and three types of commonly used privacy models. Furthermore, it presents use cases in which these technologies can be adopted and discusses future development directions.
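As a concrete illustration of the kind of de-identification such recommendations cover, here is a hedged sketch of two common operations: suppression of direct identifiers and generalization of quasi-identifiers. The field names and the coarsening rules (10-year age bands, 3-digit ZIP prefix) are illustrative assumptions, not details from the paper.

```python
# Sketch of suppression + generalization on a single record.
# All field names and rules below are hypothetical examples.
def deidentify(record, direct_ids=("name", "ssn")):
    # Suppression: drop direct identifiers entirely.
    out = {k: v for k, v in record.items() if k not in direct_ids}
    # Generalization: replace exact age with a 10-year band.
    if "age" in out:
        band = (out["age"] // 10) * 10
        out["age"] = f"{band}-{band + 9}"
    # Generalization: keep only a 3-digit ZIP prefix.
    if "zip" in out:
        out["zip"] = str(out["zip"])[:3] + "**"
    return out

print(deidentify({"name": "Alice", "ssn": "123-45-6789",
                  "age": 34, "zip": "94110", "diagnosis": "flu"}))
# {'age': '30-39', 'zip': '941**', 'diagnosis': 'flu'}
```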

2020-07-09
Kassem, Ali, Ács, Gergely, Castelluccia, Claude, Palamidessi, Catuscia.  2019.  Differential Inference Testing: A Practical Approach to Evaluate Sanitizations of Datasets. 2019 IEEE Security and Privacy Workshops (SPW). :72–79.

In order to protect individuals' privacy, data have to be "well-sanitized" before being shared, i.e., any personal information has to be removed first. However, it is not always clear when data should be deemed well-sanitized. In this paper, we argue that the evaluation of sanitized data should be based on whether the data allow the inference of sensitive information that is specific to an individual, rather than being centered on the concept of re-identification. We propose a framework to evaluate the effectiveness of different sanitization techniques on a given dataset by measuring how much an individual's record in the sanitized dataset influences the inference of his/her own sensitive attribute. Our intent is not to accurately predict any sensitive attribute but rather to measure the impact of a single record on the inference of sensitive information. We demonstrate our approach by sanitizing two real datasets under different privacy models and evaluating/comparing each sanitized dataset within our framework.
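A minimal sketch of the record-influence idea the abstract describes: compare how confidently a model infers an individual's sensitive attribute when trained with versus without that individual's sanitized record. The synthetic data and the logistic-regression model are assumptions made for illustration; this is not the authors' framework or implementation.

```python
# Hedged sketch: influence of one record on inferring its own
# sensitive attribute (synthetic data, assumed model choice).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                          # sanitized quasi-identifiers
y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)   # sensitive attribute

def confidence(X_train, y_train, x_target, y_target):
    """Model's probability for the target's true sensitive value."""
    model = LogisticRegression().fit(X_train, y_train)
    return model.predict_proba(x_target.reshape(1, -1))[0, y_target]

target = 0
with_record = confidence(X, y, X[target], y[target])
mask = np.arange(len(y)) != target
without_record = confidence(X[mask], y[mask], X[target], y[target])

# A large gap suggests the target's own record leaks information about
# his/her sensitive attribute beyond what the rest of the data reveals.
print(f"influence = {with_record - without_record:+.4f}")
```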