Differential privacy has emerged as a well-grounded approach to balancing personal privacy against the societal and commercial uses of data. The basic idea is to add random noise to analysis results, sufficient to obscure the impact of any single individual's data on the analysis and thus protect individual privacy. While general approaches to providing differential privacy exist, in many cases the bounds are not tight: more noise is added than is needed. This project uses information-theoretic techniques to explore the fundamental privacy/accuracy tradeoffs in differential privacy. The success of the proposed research will make progress toward a safer and more secure nation in which respect for individuals' privacy is not compromised. The proposed research is strongly integrated with an education plan that aims to develop a new graduate-level course on the algorithmic foundations of privacy.
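To make the noise-adding idea concrete, the sketch below shows the standard Laplace mechanism, the textbook instantiation of differential privacy (not a mechanism specific to this project): noise drawn from a Laplace distribution with scale sensitivity/epsilon masks the worst-case influence of any one record. The function name and example dataset are illustrative only.

```python
import numpy as np

def laplace_mechanism(true_answer: float, sensitivity: float, epsilon: float) -> float:
    """Release a noisy answer satisfying epsilon-differential privacy.

    Noise is drawn from Laplace(0, sensitivity / epsilon): just enough,
    in the worst case, to obscure any single individual's contribution.
    """
    scale = sensitivity / epsilon
    return true_answer + np.random.laplace(loc=0.0, scale=scale)

# Example: privately release a count. A counting query changes by at
# most 1 when one individual's record is added or removed, so its
# sensitivity is 1.
data = [17, 23, 42, 8, 15]
private_count = laplace_mechanism(len(data), sensitivity=1.0, epsilon=0.5)
print(f"true count: {len(data)}, private count: {private_count:.2f}")
```

The generality of this mechanism is exactly the issue the project targets: calibrating to worst-case sensitivity is safe but often adds more noise than the optimal privacy/accuracy tradeoff requires.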
This project will investigate three topics: (1) characterizing the fundamental tradeoff between the privacy guarantee and the utility of the released data, by applying information-theoretic tools and methods to identify tight bounds on achieving differential privacy; (2) designing data privatization mechanisms for individuals that achieve both computational efficiency and the optimal tradeoff between utility and privacy; and (3) developing a privacy calculus for macroscopic analyses of complex data processing systems consisting of multiple components, each with its own privacy guarantee. The privacy calculus aims to provide new representations and computational tools for characterizing how privacy components interact in a large system, analogous to how network calculus allows researchers to characterize complex nonlinear communication systems using familiar tools from linear systems; a simple instance of such interaction is sketched below.
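As a minimal sketch of what a privacy calculus must account for, the basic sequential composition theorem of differential privacy says that running k mechanisms with parameters eps_1, ..., eps_k on the same data yields an (eps_1 + ... + eps_k)-differentially private pipeline. This simple additive rule is standard and often loose; tighter accounting of component interactions is the kind of refinement the proposed calculus pursues. The function name below is illustrative only.

```python
def total_epsilon(component_epsilons: list[float]) -> float:
    """Basic sequential composition: privacy parameters of mechanisms
    applied to the same data add up. This is the simplest accounting
    rule; tighter composition bounds exist and refine it."""
    return sum(component_epsilons)

# A pipeline with three components, each carrying its own guarantee.
pipeline = [0.1, 0.25, 0.5]
print(f"end-to-end guarantee: eps = {total_epsilon(pipeline)}")
```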