Biblio

Filters: Keyword is Privacy Preferences
2020-04-20
Esquivel-Quiros, Luis Gustavo, Barrantes, Elena Gabriela, Darlington, Fernando Esponda.  2018.  Measuring data privacy preserving and machine learning. 2018 7th International Conference On Software Process Improvement (CIMPS). :85–94.

The increasing publication of large amounts of theoretically anonymous data can lead to a number of attacks on people's privacy. Publishing sensitive data without exposing its owners is generally not among software developers' concerns. Data privacy regulations create an appropriate scenario for focusing on privacy from the perspective of the data use and exploration that takes place in an organization. The growing number of sanctions for privacy violations motivates a systematic comparison of three well-known machine learning algorithms in order to measure the usefulness of privacy-preserved data. The scope of the evaluation is extended by comparing them against a known privacy preservation metric, under different parameter scenarios and privacy levels. The use of publicly available implementations, together with the presentation of the methodology, the explanation of the experiments, and the analysis, provides a working framework for the problem of privacy preservation. Problems are shown in measuring the usefulness of the data and in relating that usefulness to privacy preservation. The findings motivate the need for metrics optimized for the privacy preferences of the data owners, since the risk of predicting sensitive attributes by means of machine learning techniques is not usually eliminated. In addition, it is shown that complete (one hundred percent) privacy preservation may exist but cannot be measured, and that the machine learning models of interest to the data-publishing organization must still perform adequately.
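As a rough illustration of the kind of comparison this abstract describes, the following Python sketch (using NumPy and scikit-learn) trains the same classifier on an original and on a privacy-preserved version of a dataset and compares predictive utility. It is a minimal sketch, not the paper's code: the synthetic data, the decision-tree model, and the crude value coarsening that stands in for an anonymization algorithm are all assumptions.

# Minimal sketch (not from the paper): compare classifier utility on
# original vs. privacy-preserved (here: coarsened) data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
age = rng.integers(18, 90, n)             # quasi-identifier (synthetic)
income = rng.normal(50_000, 15_000, n)    # quasi-identifier (synthetic)
sensitive = (income + 200 * age + rng.normal(0, 5_000, n) > 60_000).astype(int)

X = np.column_stack([age, income])
# Crude generalization standing in for a real anonymization step:
X_anon = np.column_stack([age // 10 * 10, np.round(income, -4)])

for name, data in [("original", X), ("generalized", X_anon)]:
    X_tr, X_te, y_tr, y_te = train_test_split(data, sensitive, random_state=0)
    model = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
    print(name, "accuracy:", round(accuracy_score(y_te, model.predict(X_te)), 3))

The gap between the two accuracies is the utility cost the abstract refers to; relating that cost to a privacy preservation metric, as the paper does, is omitted here.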

2020-04-03
Alom, Md. Zulfikar, Carminati, Barbara, Ferrari, Elena.  2019.  Adapting Users' Privacy Preferences in Smart Environments. 2019 IEEE International Congress on Internet of Things (ICIOT). :165–172.

A smart environment is a physical space where devices are connected to provide continuous support to individuals and make their lives more comfortable. For this purpose, a smart environment collects, stores, and processes a massive amount of personal data. In general, service providers collect these data according to their privacy policies. To enhance privacy control, individuals can explicitly express their privacy preferences, stating conditions on how their data have to be used and managed. Typically, privacy checking is handled through hard matching of users' privacy preferences against service providers' privacy policies, denying all service requests whose privacy policies do not fully match the individual's privacy preferences. However, this hard matching might be too restrictive in a smart environment, because it denies services that partially satisfy the individual's privacy preferences. To cope with this challenge, in this paper we propose a soft privacy matching mechanism able to relax, in a controlled way, some conditions of users' privacy preferences so that they match service providers' privacy policies. To this end, we exploit machine learning algorithms to build a classifier that makes decisions on future service requests by learning which privacy preference components a user is prone to relax, as well as the relaxation tolerance. We test our approach on two realistic datasets, obtaining promising results.
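The mechanism this abstract describes, learning from past decisions which preference components a user is prone to relax, can be sketched in Python roughly as follows. This is a minimal sketch under assumed data structures, not the authors' implementation: the preference components, the binary mismatch encoding, the logistic-regression classifier, and the toy decision history are all hypothetical.

# Minimal sketch (not the authors' code): a "soft" matcher that learns,
# from past accept/deny decisions, which privacy-preference components
# a user tends to relax. All components and data are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

COMPONENTS = ["purpose", "retention", "third_party", "location"]

def mismatch_vector(preference, policy):
    """1 where the policy violates the user's preference, else 0."""
    return np.array([int(policy[c] != preference[c]) for c in COMPONENTS])

# Hypothetical history of mismatch patterns and the user's decisions
# (1 = accepted anyway, 0 = denied): this user tolerates retention
# mismatches but not purpose or third-party mismatches.
history_X = np.array([
    [0, 0, 0, 0], [0, 1, 0, 0], [0, 1, 0, 1],
    [0, 0, 1, 0], [1, 0, 1, 0], [0, 1, 1, 0],
])
history_y = np.array([1, 1, 1, 0, 0, 0])
clf = LogisticRegression().fit(history_X, history_y)

pref = {"purpose": "service", "retention": "30d", "third_party": "no", "location": "eu"}
policy = {"purpose": "service", "retention": "1y", "third_party": "no", "location": "eu"}

x = mismatch_vector(pref, policy).reshape(1, -1)
print("hard match:", bool(x.sum() == 0))                  # False: retention differs
print("soft accept probability:", clf.predict_proba(x)[0, 1].round(2))

Hard matching would deny this request outright; the learned classifier instead estimates how acceptable the retention mismatch is to this particular user, which plays the role of the relaxation tolerance.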