CRII: SaTC: PrivateNet - Preserving Differential Privacy in Deep Learning under Model Attacks

Project Details

Lead PI

Performance Period

Feb 15, 2019 - Jan 21, 2021

Institution(s)

New Jersey Institute of Technology

Award Number


The rapid adoption of machine learning in healthcare raises clear privacy concerns when deep neural networks and other models are trained on patients' personal and highly sensitive data, such as clinical records or tracked health data. These models can also be vulnerable to attackers who try to infer the sensitive data used to build them. This raises important research questions about how to develop machine learning models that protect private data against inference attacks while remaining accurate and useful predictors, as well as important practical questions about how these risks to patient data may expose health care providers to legal action under HIPAA and related regulations. To address these questions, this project will develop PrivateNet, a framework for privacy preservation in deep neural networks under model attacks, offering strong privacy protections for data used in deep learning. PrivateNet will be built on top of commonly used machine learning frameworks, giving the project's findings a path to impact in both industry and educational contexts.

A key thrust of the project is to better understand and defend against model inference attacks, including both well-known fundamental model attacks and novel attacks developed through the prism of classical confidentiality and integrity models. Through an extensive analysis of these attacks, the team will develop an understanding of the relative risks posed by key aspects of learning approaches. In particular, vulnerable features, parameters, and correlations, which are essential to conducting model attacks, will be automatically identified and protected in a novel threat-aware privacy-preserving approach based on ideas from differential privacy. Specifically, the team will develop adaptive privacy-preserving mechanisms that distribute noise across the most vulnerable aspects of the learning process, providing strong differential privacy protections in deep learning models while maintaining high model utility. The project is expected to lay a foundation of key privacy-preserving techniques to protect users' personal and highly sensitive data in deep learning under model attacks.
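As a rough illustration of the kind of adaptive noise allocation described above (not the project's actual mechanism), the following Python sketch splits a total differential privacy budget across features in proportion to hypothetical relevance scores, so that the most relevant features receive a larger share of the budget and therefore less Laplace noise. The function name, the relevance scores, and the budget-splitting rule are illustrative assumptions, not part of PrivateNet.

import numpy as np

def adaptive_laplace_noise(features, relevance, total_epsilon, sensitivity=1.0, rng=None):
    """Perturb a feature vector with Laplace noise whose per-feature scale is
    adapted to relevance scores: features judged more relevant (or more
    vulnerable) are given a larger slice of the privacy budget and thus
    receive less noise, while less relevant features absorb more noise.
    Illustrative sketch only; budget allocation and sensitivity analysis
    here are simplified assumptions."""
    rng = np.random.default_rng() if rng is None else rng
    relevance = np.asarray(relevance, dtype=float)
    # Normalize relevance scores into a budget-allocation distribution.
    weights = relevance / relevance.sum()
    # Per-feature epsilon: sequential composition over coordinates sums to total_epsilon.
    eps_per_feature = total_epsilon * weights
    # Laplace scale b = sensitivity / epsilon; smaller epsilon means larger noise.
    scales = sensitivity / np.maximum(eps_per_feature, 1e-12)
    noise = rng.laplace(loc=0.0, scale=scales)
    return np.asarray(features, dtype=float) + noise

# Example: a 4-feature record where the first feature is deemed most
# relevant by some (hypothetical) attack or relevance analysis.
x = np.array([0.8, 0.1, 0.5, 0.3])
relevance_scores = np.array([0.4, 0.3, 0.2, 0.1])
x_private = adaptive_laplace_noise(x, relevance_scores, total_epsilon=1.0)
print(x_private)

In this toy setting the total budget is preserved by standard sequential composition across coordinates; an actual threat-aware mechanism would instead derive where and how much noise to inject from the identified vulnerable features, parameters, and correlations.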