Securing Safety-Critical Machine Learning Algorithms - October 2022

PI(s), Co-PI(s), Researchers: Lujo Bauer, Matt Fredrikson (CMU), Mike Reiter (UNC)

HARD PROBLEM(S) ADDRESSED

This project addresses the following hard problems: developing security metrics and developing resilient architectures. Both problems are tackled in the context of deep neural networks, which are a particularly popular and performant type of machine learning algorithm. This project develops metrics that characterize the degree to which a neural-network-based classifier can be evaded through practically realizable, inconspicuous attacks. The project also develops architectures for neural networks that would make them robust to adversarial examples.

PUBLICATIONS

Weiran Lin, Keane Lucas, Lujo Bauer, Michael K. Reiter, and Mahmood Sharif. Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks. In Proceedings of the 39th International Conference on Machine Learning (ICML), 2022.

PUBLIC ACCOMPLISHMENT HIGHLIGHTS

In a paper published at ICML 2022, we introduced a new loss function and a new attack method for creating adversarial examples. Both aim to better reflect an attacker's goals when searching for adversarial examples: whereas current attacks typically clip intermediate perturbations to force the search to stay within some Lp-norm distance of the original input, our attack method allows the search to temporarily explore regions beyond the eventual epsilon boundary. We showed experimentally that our attack method finds more adversarial examples than previous-best attacks, and that our new loss function, when used within those previous-best attacks, also increases the number of adversarial examples they find.
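
To illustrate the idea of deferring the norm constraint, the following is a minimal PyTorch-style sketch of a PGD-like search in which projection onto the L-infinity epsilon-ball is applied only in the final iterations, so intermediate iterates may temporarily leave the ball. This is an illustrative sketch only, not the paper's Constrained Gradient Descent algorithm; all names (model, x, y, eps, step_size, num_steps, project_after) are hypothetical.

    # Illustrative sketch: a PGD-style attack that defers projection onto the
    # L-inf eps-ball until the last iterations, unlike standard PGD, which
    # projects after every step. Not the paper's exact CGD algorithm.
    import torch
    import torch.nn.functional as F

    def deferred_projection_attack(model, x, y, eps=8/255, step_size=2/255,
                                   num_steps=100, project_after=80):
        """Search for an adversarial example for input x with true label y."""
        x_adv = x.clone().detach()
        for step in range(num_steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            with torch.no_grad():
                # Ascend the loss to move away from the correct classification.
                x_adv = x_adv + step_size * grad.sign()
                # Standard PGD would project onto the eps-ball here at every
                # step; this sketch projects only in the final iterations, so
                # the search may temporarily exceed the epsilon boundary.
                if step >= project_after:
                    x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
                x_adv = x_adv.clamp(0.0, 1.0)  # stay in valid pixel range
            x_adv = x_adv.detach()
        # Final projection guarantees the returned example satisfies the bound.
        return torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)

The returned example always satisfies the Lp constraint (here L-infinity), but the intermediate search is less restricted, which is the intuition behind allowing exploration beyond the eventual epsilon boundary.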

COMMUNITY ENGAGEMENTS (If applicable)

No new data

EDUCATIONAL ADVANCES (If applicable)

N/A this quarter