Leveraging Hierarchical Representations for Preserving Privacy and Utility in Text

Title: Leveraging Hierarchical Representations for Preserving Privacy and Utility in Text
Publication Type: Conference Paper
Year of Publication: 2019
Authors: Feyisetan, Oluwaseyi; Diethe, Tom; Drake, Thomas
Conference Name: 2019 IEEE International Conference on Data Mining (ICDM)
Date Published: Nov. 2019
Publisher: IEEE
ISBN Number: 978-1-7281-4604-1
Keywords: arbitrary piece, compositionality, data deletion, data privacy, Data Sanitization, Differential privacy, document redaction, downstream machine learning models, expected privacy, hierarchical representations, high dimensional hyperbolic space, Human Behavior, human factors, learning (artificial intelligence), non-Hamming distance metrics, privacy, privacy analysis, privacy experiments, probability, proof satisfying dx-privacy, pubcrawl, resilience, Resiliency, Scalability, semantic generalization, supporting data driven decisions, text analysis, training machine learning models, user privacy, utility experiments highlight, vast data stores, word representations
Abstract

Guaranteeing a certain level of user privacy in an arbitrary piece of text is a challenging issue. However, with this challenge comes the potential of unlocking access to vast data stores for training machine learning models and supporting data-driven decisions. We address this problem through the lens of dx-privacy, a generalization of Differential Privacy to non-Hamming distance metrics. In this work, we explore word representations in Hyperbolic space as a means of preserving privacy in text. We provide a proof satisfying dx-privacy, then we define a probability distribution in Hyperbolic space and describe a way to sample from it in high dimensions. Privacy is provided by perturbing vector representations of words in high-dimensional Hyperbolic space to obtain a semantic generalization. We conduct a series of experiments to demonstrate the trade-off between privacy and utility. Our privacy experiments illustrate protections against an authorship attribution algorithm while our utility experiments highlight the minimal impact of our perturbations on several downstream machine learning models. Compared to the Euclidean baseline, we observe >20x greater guarantees on expected privacy against comparable worst-case statistics.
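The general mechanism the abstract describes — perturb a word's embedding with calibrated noise, then emit the vocabulary word nearest the noisy point — can be sketched for the Euclidean baseline the paper compares against. This is a minimal illustration, not the authors' hyperbolic method: the toy 2-D embeddings and the helper names are hypothetical, and the noise used here is the standard d-dimensional Laplace-style mechanism for Euclidean dx-privacy (uniform random direction scaled by a Gamma(d, 1/epsilon) radius).

```python
import numpy as np

def dx_privacy_perturb(vec, epsilon, rng):
    """Euclidean dx-privacy noise (sketch): a uniformly random direction
    scaled by a radius drawn from Gamma(shape=d, scale=1/epsilon)."""
    d = vec.shape[0]
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)          # uniform point on the sphere
    radius = rng.gamma(shape=d, scale=1.0 / epsilon)
    return vec + radius * direction

def privatize_word(word, embeddings, epsilon, rng):
    """Replace `word` with the vocabulary word whose embedding is
    nearest to the perturbed embedding (semantic generalization step)."""
    noisy = dx_privacy_perturb(embeddings[word], epsilon, rng)
    vocab = list(embeddings)
    dists = [np.linalg.norm(embeddings[w] - noisy) for w in vocab]
    return vocab[int(np.argmin(dists))]

# Toy 2-D embeddings (hypothetical values, for illustration only).
rng = np.random.default_rng(0)
emb = {
    "cat": np.array([1.0, 0.0]),
    "dog": np.array([0.9, 0.2]),
    "car": np.array([-1.0, 0.5]),
}
out = privatize_word("cat", emb, epsilon=5.0, rng=rng)
assert out in emb  # always lands on some vocabulary word
```

Smaller epsilon means larger expected noise radius, so the output word drifts further from the input in embedding space; the paper's contribution is performing this perturbation in hyperbolic rather than Euclidean space to exploit its hierarchical structure.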

URL: https://ieeexplore.ieee.org/document/8970912
DOI: 10.1109/ICDM.2019.00031
Citation Key: feyisetan_leveraging_2019