Biblio

Filters: Keyword is re-identification risk
2019-10-23
Karmaker Santu, Shubhra Kanti, Bindschadler, Vincent, Zhai, ChengXiang, Gunter, Carl A. 2018. NRF: A Naive Re-Identification Framework. Proceedings of the 2018 Workshop on Privacy in the Electronic Society. :121–132.

The promise of big data relies on the release and aggregation of data sets. When these data sets contain sensitive information about individuals, it has been scalable and convenient to protect the privacy of these individuals by de-identification. However, studies show that combining de-identified data sets with other data sets risks re-identification of some records. Some studies have shown how to measure this risk in specific contexts where certain types of public data sets (such as voter rolls) are assumed to be available to attackers. To the extent that it can be accomplished, such analyses allow the threat of compromise to be balanced against the benefits of sharing data. For example, a study that might save lives through medical research may be justified in light of a sufficiently low probability of compromise from sharing de-identified data. In this paper, we introduce a general probabilistic re-identification framework that can be instantiated in specific contexts to estimate the probability of compromise based on explicit assumptions. We further propose a baseline of such assumptions that enables a first-cut estimate of risk for practical case studies. We refer to the framework with these assumptions as the Naive Re-identification Framework (NRF). As a case study, we show how NRF can be applied to analyze and quantify the risk of re-identification arising from releasing de-identified medical data in the context of publicly available social media data. The results of this case study show that NRF can be used to obtain a meaningful quantification of the re-identification risk, compare the risk across different social media platforms, and assess the risks of combinations of various demographic attributes and medical conditions that individuals may voluntarily disclose on social media.
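To give a rough intuition for what a matching-based re-identification risk estimate looks like, the sketch below computes the probability of correctly linking one de-identified record to a public data set under a uniform-guess assumption: if k public records share the same quasi-identifier values, the adversary's success probability is 1/k. This is a minimal illustration of the general idea only; the field names, data, and function are hypothetical and are not taken from the NRF paper, whose probabilistic model is more general.

```python
def naive_reid_risk(deid_record, public_records, quasi_identifiers):
    """Estimate the chance of re-identifying one de-identified record.

    Assumption: the adversary links on exact quasi-identifier matches and
    guesses uniformly among matching public records, so risk = 1 / (number
    of matches). Returns 0.0 if no public record matches.
    """
    key = tuple(deid_record[q] for q in quasi_identifiers)
    matches = sum(
        1 for r in public_records
        if tuple(r[q] for q in quasi_identifiers) == key
    )
    return 0.0 if matches == 0 else 1.0 / matches


# Hypothetical example: age and zip code voluntarily disclosed on social media.
public = [
    {"name": "A", "age": 34, "zip": "61801"},
    {"name": "B", "age": 34, "zip": "61801"},
    {"name": "C", "age": 52, "zip": "61820"},
]
deid = {"age": 34, "zip": "61801", "condition": "diabetes"}
print(naive_reid_risk(deid, public, ["age", "zip"]))  # 0.5
```

Averaging such per-record estimates over a released data set gives one simple way to compare risk across different auxiliary sources, which is the kind of comparison the case study performs at a more principled level.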

2018-01-10
Zaman, A. N. K., Obimbo, C., Dara, R. A. 2017. An improved differential privacy algorithm to protect re-identification of data. 2017 IEEE Canada International Humanitarian Technology Conference (IHTC). :133–138.

In recent years, there has been a huge increase in large data repositories maintained by corporations, governments, and healthcare organizations. These repositories provide opportunities to design and improve decision-making systems by mining trends and patterns from the data, which can provide credible information for improving services (e.g., in healthcare). As a result, while data sharing is essential, there is an obligation to maintain the privacy of the data donors, as data custodians have legal and ethical responsibilities to secure confidentiality. This research proposes a 2-layer privacy preserving (2-LPP) data sanitization algorithm that satisfies ε-differential privacy for publishing sanitized data. The proposed algorithm also reduces the re-identification risk of the sanitized data. The proposed algorithm has been implemented and tested with two different data sets. Compared to other existing works, the results obtained from the proposed algorithm show promising performance.
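For readers unfamiliar with ε-differential privacy, the sketch below shows the standard Laplace mechanism, one common building block for satisfying ε-differential privacy on numeric queries. It is not the 2-LPP algorithm from the paper (which sanitizes data for publication); the query, sensitivity, and epsilon values are illustrative assumptions only.

```python
import numpy as np


def laplace_mechanism(true_value, sensitivity, epsilon):
    """Return a noisy answer that satisfies epsilon-differential privacy.

    Noise is drawn from Laplace(0, sensitivity / epsilon); a smaller epsilon
    means more noise and therefore stronger privacy.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)


# Hypothetical example: releasing a noisy count of patients with a condition.
# The sensitivity of a counting query is 1 (one person changes the count by 1).
true_count = 120
print(laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5))
```

The trade-off the paper evaluates is essentially this one: stronger privacy (smaller ε, more perturbation) versus the utility of the published data, with the added goal of keeping re-identification risk low.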