Biblio

Filters: Keyword is re-identification
Joo, Moon-Ho, Yoon, Sang-Pil, Kim, Sahng-Yoon, Kwon, Hun-Yeong.  2017.  Research on Distribution of Responsibility for De-Identification Policy of Personal Information. Proceedings of the 18th Annual International Conference on Digital Government Research. :74–83.
With the coming of the age of big data, many countries have been actively working to institutionalize the de-identification of personal information, aiming to protect privacy while still allowing personal information to be used, and several are already implementing and refining de-identification policies. Even with these efforts, however, the danger of re-identification from de-identified information is real enough to warrant a mechanism for managing such risks, as well as a mechanism for distributing the responsibilities and liabilities that follow when privacy-invading accidents and incidents occur. So far, most countries implementing de-identification policies have focused on defining what de-identification is and on the exemption requirements that allow free use of de-identified personal information; there has been little discussion of how to distribute the risks and liabilities that arise in the process of de-identifying personal information. This study surveys de-identification policies worldwide and examines them from the perspective of risk-liability theory. It also identifies the constituencies of these policies and analyzes the roles and responsibilities of each, providing a theoretical basis for discussing how the burdens and responsibilities arising from de-identification policies should be distributed.
Xia, Weiyi, Kantarcioglu, Murat, Wan, Zhiyu, Heatherly, Raymond, Vorobeychik, Yevgeniy, Malin, Bradley.  2015.  Process-Driven Data Privacy. Proceedings of the 24th ACM International on Conference on Information and Knowledge Management. :1021–1030.

The quantity of personal data gathered by service providers via our daily activities continues to grow at a rapid pace. The sharing, and subsequent analysis, of such data can support a wide range of activities, but privacy concerns often prompt an organization to transform the data to meet certain protection models (e.g., k-anonymity or ε-differential privacy). These models, however, are based on simplistic adversarial frameworks, which can lead to both under- and over-protection. For instance, such models often assume that an adversary attacks a protected record exactly once. We introduce a principled approach to explicitly model the attack process as a series of steps. Specifically, we engineer a factored Markov decision process (FMDP) to optimally plan an attack from the adversary's perspective and assess the privacy risk accordingly. The FMDP captures the uncertainty in the adversary's belief (e.g., the number of identified individuals that match the de-identified data) and enables the analysis of various real-world deterrence mechanisms beyond a traditional protection model, such as a penalty for committing an attack. We present an algorithm to solve the FMDP and illustrate its efficiency by simulating an attack on publicly accessible U.S. census records against a real identified resource of over 500,000 individuals in a voter registry. Our results demonstrate that while traditional privacy models commonly expect an adversary to attack exactly once per record, an optimal attack in our model may involve exploiting none, one, or more individuals in the pool of candidates, depending on context.
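The idea of planning an attack as a sequence of steps, with a penalty as a deterrent, can be illustrated with a much simpler model than the paper's factored MDP. The sketch below is a toy dynamic program under assumed parameters (a de-identified record matching k candidates in an identified resource, a hypothetical gain G for a correct re-identification, and a hypothetical penalty C for each wrong guess, with the adversary told whether a guess was correct); it is not the authors' FMDP, only a minimal example of how an optimal adversary may attack none, one, or several candidates depending on context.

```python
# Toy attack-planning model (illustration only, not the paper's FMDP).
# An adversary sees a de-identified record matching m candidates in an
# identified resource. Each attempt names one candidate uniformly at random:
# success pays `gain`, a wrong guess costs `penalty`, after which the
# adversary may keep guessing among the remaining candidates or stop.
# Backward induction over the number of remaining candidates yields the
# optimal stopping policy.

from functools import lru_cache


def optimal_attack_value(k: int, gain: float, penalty: float) -> float:
    """Expected value of the optimal attack strategy with k candidates."""

    @lru_cache(maxsize=None)
    def value(m: int) -> float:
        if m == 0:
            return 0.0  # no candidates left: nothing to attack
        # Attack once more: succeed with prob 1/m, otherwise pay the penalty
        # and continue optimally with m - 1 candidates.
        keep_attacking = gain / m + (m - 1) / m * (-penalty + value(m - 1))
        # Stopping is always worth 0; never attack if attacking loses money.
        return max(0.0, keep_attacking)

    return value(k)


if __name__ == "__main__":
    # With a mild penalty the attack stays profitable; a harsh one deters it.
    for C in (1.0, 10.0, 100.0):
        v = optimal_attack_value(k=5, gain=50.0, penalty=C)
        print(f"penalty={C:6.1f}  optimal expected value={v:7.2f}")
```

With these toy numbers (k=5, gain 50), the optimal expected value falls from 48 to 30 to 0 as the penalty grows from 1 to 10 to 100, i.e., a sufficiently large penalty makes the rational adversary not attack at all, which mirrors the abstract's point that the optimal attack may target none, one, or several candidates depending on context.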