Biblio

Filters: Keyword is k-anonymity
2018-02-06
Hassoon, I. A., Tapus, N., Jasim, A. C..  2017.  Enhance Privacy in Big Data and Cloud via Diff-Anonym Algorithm. 2017 16th RoEduNet Conference: Networking in Education and Research (RoEduNet). :1–5.

The main issue with big data in the cloud is that processing and use are always carried out by a third party. It is therefore essential for data owners and clients to trust the cloud and to have a guarantee of privacy for the information stored there or analyzed as big data. The privacy models examined in previous research show that privacy infringements in big data occur because of their limitations, their privacy-guarantee rates, or the dissemination of accurate data available in the data set. Since various privacy models exist, further research is needed to determine the best and most appropriate model to apply in the future that also guarantees big data privacy. In the next part, we survey several privacy models in order to determine the advantages and disadvantages of each with respect to privacy assurance for big data in the cloud. The present study also proposes a combined Diff-Anonym algorithm (k-anonymity and differential privacy models) that provides data anonymity while keeping a balance between the ambiguity of private data and the clarity of general data.
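The abstract gives only a high-level description of Diff-Anonym. As a rough illustration of the general combination it refers to, k-anonymity generalization followed by a differentially private release, and not the authors' actual algorithm, the Python sketch below widens an age band until every group holds at least k records and then publishes group counts with Laplace noise. All names, the fixed 3-digit ZIP prefix, and the sample records are illustrative assumptions.

    # Toy sketch (not the paper's Diff-Anonym): only the age attribute is
    # generalized; the ZIP code keeps a fixed 3-digit prefix.
    import numpy as np
    from collections import Counter

    def generalize_age(age, band):
        """Coarsen an exact age into a band such as '30-39'."""
        lo = (age // band) * band
        return f"{lo}-{lo + band - 1}"

    def k_anonymize(records, k, band=10):
        """Widen the age band until each (age band, ZIP prefix) group holds >= k records."""
        while True:
            groups = Counter((generalize_age(r["age"], band), r["zip"][:3]) for r in records)
            if all(c >= k for c in groups.values()):
                return groups, band
            band *= 2  # coarser generalization -> larger, safer groups

    def dp_release(groups, epsilon):
        """Release each group count with Laplace(1/epsilon) noise (count queries have sensitivity 1)."""
        return {g: max(0.0, c + np.random.laplace(scale=1.0 / epsilon)) for g, c in groups.items()}

    records = [{"age": 34, "zip": "10115"}, {"age": 37, "zip": "10117"},
               {"age": 52, "zip": "10115"}, {"age": 58, "zip": "10119"}]
    groups, band = k_anonymize(records, k=2)
    print(dp_release(groups, epsilon=0.5))

The split mirrors the "balance" the abstract mentions: generalization blurs the private quasi-identifiers, while the noisy counts keep the general statistics usable.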

2017-05-18
Lin, Jerry Chun-Wei, Liu, Qiankun, Fournier-Viger, Philippe, Hong, Tzung-Pei, Zhan, Justin, Voznak, Miroslav.  2016.  An Efficient Anonymous System for Transaction Data. Proceedings of the 3rd Multidisciplinary International Social Networks Conference on SocialInformatics 2016, Data Science 2016. :28:1–28:6.

k-anonymity is an efficient way to anonymize relational data and protect privacy against re-identification attacks. When k-anonymity is applied to transaction data, each item is treated as a quasi-identifier attribute, which raises a high-dimensionality problem and increases both the computational complexity and the information loss of anonymization. In this paper, an efficient anonymity system is designed that not only anonymizes transaction data with lower information loss but also reduces the computational complexity of anonymization. An extensive experiment is carried out to show the efficiency of the designed approach compared with state-of-the-art anonymization algorithms in terms of runtime and information loss. Experimental results indicate that the proposed anonymous system outperforms the compared algorithms in all respects.
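The proposed system itself is not described in the abstract; the sketch below only illustrates the baseline problem it addresses: when every item is a quasi-identifier, a naive greedy suppression of the globally rarest item until each itemset is shared by at least k transactions achieves k-anonymity, but at the cost of information loss, measured here as the fraction of item occurrences removed. Function and variable names are illustrative assumptions.

    # Naive baseline (not the paper's system): greedily suppress the rarest
    # item until every remaining itemset is shared by at least k transactions.
    from collections import Counter

    def suppress_to_k_anonymity(transactions, k):
        assert len(transactions) >= k, "need at least k transactions"
        txns = [frozenset(t) for t in transactions]
        while any(c < k for c in Counter(txns).values()):
            support = Counter(item for t in txns for item in t)
            rarest = min(support, key=support.get)   # cheapest item to sacrifice
            txns = [t - {rarest} for t in txns]
        before = sum(len(t) for t in transactions)
        after = sum(len(t) for t in txns)
        info_loss = 1 - after / before               # fraction of item occurrences removed
        return [sorted(t) for t in txns], info_loss

    txns = [{"bread", "milk"}, {"bread", "milk"},
            {"bread", "milk", "beer"}, {"bread", "milk"}]
    anon, loss = suppress_to_k_anonymity(txns, k=2)
    print(anon, f"information loss = {loss:.2f}")

On larger, sparser transaction databases this kind of global suppression quickly removes most items, which is the information-loss problem the paper sets out to reduce.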

2017-03-08
Chi, H., Hu, Y. H..  2015.  Face de-identification using facial identity preserving features. 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP). :586–590.

Automated human facial image de-identification is a much-needed technology for privacy-preserving social media and intelligent surveillance applications. Rather than relying on the usual face-blurring techniques, in this work we propose to achieve facial anonymity by slightly modifying existing facial images into "averaged faces" so that the corresponding identities are difficult to uncover. This approach preserves the aesthetics of the facial images while achieving the goal of privacy protection. In particular, we explore deep learning-based facial identity-preserving (FIP) features. Unlike conventional face descriptors, FIP features can significantly reduce intra-identity variance while maintaining inter-identity distinctions. By suppressing and tinkering with the FIP features, we achieve k-anonymity facial image de-identification while preserving the desired utilities. Using a face database, we successfully demonstrate that the resulting "averaged faces" still preserve the aesthetics of the original images while defying facial identity recognition.
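The FIP feature pipeline is not reproduced in the abstract; as a hedged sketch of the underlying k-Same averaging idea rather than the authors' method, the snippet below greedily groups face embeddings and replaces each member with its group mean, so any de-identified face corresponds to at least k possible identities. The 512-dimensional embeddings and the greedy nearest-neighbour grouping are illustrative assumptions.

    # k-Same style averaging (not the paper's FIP pipeline): every face in a
    # group of >= k receives the same averaged feature vector.
    import numpy as np

    def k_same_average(features, k):
        """Greedily group each unassigned face with its k-1 nearest neighbours
        and substitute every member's vector with the group mean."""
        n = len(features)
        assert n >= k
        out = np.empty_like(features)
        unassigned = list(range(n))
        while unassigned:
            if len(unassigned) < 2 * k:          # avoid leaving a last group smaller than k
                group = list(unassigned)
            else:
                seed = unassigned[0]
                dists = np.linalg.norm(features[unassigned] - features[seed], axis=1)
                group = [unassigned[i] for i in np.argsort(dists)[:k]]
            out[group] = features[group].mean(axis=0)   # one "averaged face" per group
            unassigned = [i for i in unassigned if i not in group]
        return out

    faces = np.random.randn(10, 512)             # stand-in for FIP embeddings
    anonymized = k_same_average(faces, k=3)
    print(anonymized.shape)

Averaging in a feature space that preserves identity-relevant structure, as the FIP features aim to do, is what lets the result remain a plausible face while blocking re-identification.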

2015-05-04
Wiesner, K., Feld, S., Dorfmeister, F., Linnhoff-Popien, C..  2014.  Right to silence: Establishing map-based Silent Zones for participatory sensing. 2014 IEEE Ninth International Conference on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP). :1–6.

Participatory sensing tries to create cost-effective, large-scale sensing systems by leveraging sensors embedded in mobile devices. One major challenge in these systems is protecting the users' privacy, since users will not contribute data if their privacy is jeopardized. Location data in particular needs to be protected if it is likely to reveal information about the users' identities. A common solution is the blinding-out approach, which creates so-called ban zones in which location data is not published. Thereby, a user's important places, e.g., her home or workplace, can be concealed. However, ban zones of a fixed size cannot guarantee any particular level of privacy. For instance, a ban zone that is large enough to conceal a user's home in a large city might be too small in a less populated area. For this reason, we propose an approach for dynamic map-based blinding out: the boundaries of our privacy zones, called Silent Zones, are determined in such a way that at least k buildings are located within each zone. Our approach thus adapts to the density of the surrounding buildings, and we can guarantee k-anonymity in terms of those buildings. In this paper, we present two new algorithms for creating Silent Zones and evaluate their performance. Our results show that, especially in worst-case scenarios, i.e., in sparsely populated areas, our approach outperforms standard ban zones and guarantees the specified privacy level.
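The two algorithms evaluated in the paper are not given in the abstract; the following sketch only captures the stated guarantee: grow a zone around a sensitive place until it contains at least k buildings, then suppress any position sample that falls inside it. The circular zone shape, planar coordinates in metres, and the building list are illustrative assumptions.

    # Minimal illustration of the Silent Zone guarantee (not the paper's
    # map-based algorithms): grow a circle until it covers >= k buildings.
    import math

    def silent_zone_radius(center, buildings, k, start=50.0, step=25.0):
        """Return the smallest tested radius whose circle covers at least k buildings."""
        assert len(buildings) >= k, "need at least k candidate buildings"
        radius = start
        while sum(math.dist(center, b) <= radius for b in buildings) < k:
            radius += step                       # adapt to the local building density
        return radius

    def publish(position, center, radius):
        """Suppress the sample inside the Silent Zone, otherwise report it."""
        return None if math.dist(position, center) <= radius else position

    home = (0.0, 0.0)                            # sensitive place to conceal
    buildings = [(30, 10), (80, -40), (120, 60), (-150, 90), (200, -20)]
    r = silent_zone_radius(home, buildings, k=3)
    print(r, publish((10, 5), home, r), publish((400, 400), home, r))

Because the radius depends on how many buildings surround the sensitive place, the zone automatically grows in sparsely built areas, which is where fixed-size ban zones fail.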