Biblio
Maximal Information Leakage based Privacy Preserving Data Disclosure Mechanisms. 2019 16th Canadian Workshop on Information Theory (CWIT), 2019, pp. 1–6.

It is often necessary to disclose training data to the public domain while protecting the privacy of certain sensitive labels. We use information-theoretic measures to develop such privacy-preserving data disclosure mechanisms. Our mechanism perturbs the data vectors to strike a balance in the privacy-utility trade-off. We use the maximal information leakage between the output data vector and the confidential label as our privacy metric. We first study the theoretical Bernoulli-Gaussian model and characterize the privacy-utility trade-off when only the means of the Gaussian distributions can be perturbed. We show that the optimal solution coincides with the case in which utility is measured by the adversary's probability of error. We then apply this framework to a data-driven setting and provide an empirical approximation to the Sibson mutual information. Experiments on the MNIST and FERG data sets show that the proposed framework achieves equivalent or better privacy than previous methods based on mutual information.
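For context, the privacy metric referred to above, maximal information leakage, has a standard closed form for finite alphabets and arises as the order-\(\alpha \to \infty\) limit of the Sibson mutual information; the generic notation \(P_X\), \(P_{Y|X}\) below is standard in the information-theory literature and is not taken from the paper itself:

\[
\mathcal{L}(X \to Y) \;=\; \log \sum_{y} \max_{x:\, P_X(x) > 0} P_{Y|X}(y \mid x),
\qquad
I_\alpha(X;Y) \;=\; \frac{\alpha}{\alpha - 1} \log \sum_{y} \Bigl( \sum_{x} P_X(x)\, P_{Y|X}(y \mid x)^{\alpha} \Bigr)^{1/\alpha},
\]

with \(\mathcal{L}(X \to Y) = \lim_{\alpha \to \infty} I_\alpha(X;Y)\). Here \(\mathcal{L}(X \to Y)\) quantifies how much an adversary observing \(Y\) can learn about \(X\), which is what the abstract's "empirical approximation to the Sibson mutual information" is estimating in the data-driven setting.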