Certified Robustness to Adversarial Examples with Differential Privacy

Title: Certified Robustness to Adversarial Examples with Differential Privacy
Publication Type: Conference Paper
Year of Publication: 2019
Authors: Lecuyer, Mathias; Atlidakis, Vaggelis; Geambasu, Roxana; Hsu, Daniel; Jana, Suman
Conference Name: 2019 IEEE Symposium on Security and Privacy (SP)
Keywords: adversarial examples, Adversarial-Examples, certified defense, certified robustness, cryptographically-inspired privacy formalism, cryptography, data privacy, Databases, deep neural networks, Deep-learning, defense, Differential privacy, Google Inception network, ImageNet, learning (artificial intelligence), machine learning models, machine-learning, Mathematical model, Measurement, Metrics, neural nets, norm-bounded attacks, PixelDP, Predictive models, privacy models and measurement, pubcrawl, Robustness, security, Sophisticated Attacks, Standards
Abstract: Adversarial examples that fool machine learning models, particularly deep neural networks, have been a topic of intense research interest, with attacks and defenses being developed in a tight back-and-forth. Most past defenses are best-effort and have been shown to be vulnerable to sophisticated attacks. Recently, certified defenses have been introduced that provide guarantees of robustness to norm-bounded attacks. However, these defenses either do not scale to large datasets or are limited in the types of models they can support. This paper presents the first certified defense that both scales to large networks and datasets (such as Google's Inception network for ImageNet) and applies broadly to arbitrary model types. Our defense, called PixelDP, is based on a novel connection between robustness against adversarial examples and differential privacy, a cryptographically inspired privacy formalism, which provides a rigorous, generic, and flexible foundation for defense.
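Notes: A rough sketch of the connection the abstract describes, paraphrased rather than quoted from the record. PixelDP treats the noise-injected network $A$ as a randomized function that is $(\epsilon, \delta)$-differentially private with respect to small input changes, i.e., for inputs $x, x'$ with $\|x - x'\|_p \le 1$ and any output set $S$:

\[
P(A(x) \in S) \;\le\; e^{\epsilon}\, P(A(x') \in S) \;+\; \delta .
\]

Under this assumption, with expected output scores $\mathbb{E}[A(x)]_i \in [0,1]$, the paper certifies a predicted label $k$ as robust when its expected score dominates all others by a DP-derived margin, of the form:

\[
\mathbb{E}[A(x)]_k \;>\; e^{2\epsilon} \max_{i \neq k} \mathbb{E}[A(x)]_i \;+\; (1 + e^{\epsilon})\,\delta ,
\]

since the DP guarantee bounds how much any bounded perturbation of $x$ can shift the expected scores. The symbols and exact constants here follow our reading of the paper and should be checked against the published statement.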
DOI: 10.1109/SP.2019.00044
Citation Key: lecuyer_certified_2019