Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning

Title: Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning
Publication Type: Conference Paper
Year of Publication: 2021
Authors: Nasr, Milad, Song, Shuang, Thakurta, Abhradeep, Papernot, Nicolas, Carlini, Nicholas
Conference Name: 2021 IEEE Symposium on Security and Privacy (SP)
Keywords: Adversary Models, Deep Learning, Deep-learning, Differential privacy, Differentially-private, Differentially-private-(DP)-machine-learning, DP-SGD, Games, Human Behavior, machine-learning, Membership-Inference, Metrics, privacy, pubcrawl, Resiliency, Scalability, Toxicology, Training, Upper bound
Abstract: Differentially private (DP) machine learning allows us to train models on private data while limiting data leakage. DP formalizes this data leakage through a cryptographic game, where an adversary must predict if a model was trained on a dataset D, or a dataset D' that differs in just one example. If observing the training algorithm does not meaningfully increase the adversary's odds of successfully guessing which dataset the model was trained on, then the algorithm is said to be differentially private. Hence, the purpose of privacy analysis is to upper bound the probability that any adversary could successfully guess which dataset the model was trained on. In our paper, we instantiate this hypothetical adversary in order to establish lower bounds on the probability that this distinguishing game can be won. We use this adversary to evaluate the importance of the adversary capabilities allowed in the privacy analysis of DP training algorithms. For DP-SGD, the most common method for training neural networks with differential privacy, our lower bounds are tight and match the theoretical upper bound. This implies that in order to prove better upper bounds, it will be necessary to make use of additional assumptions. Fortunately, we find that our attacks are significantly weaker when additional (realistic) restrictions are put in place on the adversary's capabilities. Thus, in the practical setting common to many real-world deployments, there is a gap between our lower bounds and the upper bounds provided by the analysis: differential privacy is conservative and adversaries may not be able to leak as much information as suggested by the theoretical bound.
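For reference (this formulation is standard and not drawn from the record itself), the distinguishing game described in the abstract corresponds to the usual (epsilon, delta)-differential-privacy guarantee, which upper bounds any adversary's ability to tell the two neighboring datasets apart from the trained model:

% Standard (epsilon, delta)-DP guarantee: for every pair of adjacent datasets
% D, D' (differing in one example) and every set S of possible outputs of the
% training mechanism M, the output distributions are nearly indistinguishable.
\[
  \Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta
  \qquad \text{for all adjacent } D, D' \text{ and all sets } S .
\]

The paper's contribution can be read against this inequality: instantiating a concrete adversary yields lower bounds on the distinguishing probability, which for DP-SGD meet the upper bound implied by the guarantee above.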
DOI: 10.1109/SP40001.2021.00069
Citation Key: nasr_adversary_2021