Securing Safety-Critical Machine Learning Algorithms - July 2019

PI(s), Co-PI(s), Researchers: Lujo Bauer, Matt Fredrikson (CMU), Mike Reiter (UNC)

HARD PROBLEM(S) ADDRESSED

This project addresses the following hard problems: developing security metrics and developing resilient architectures. Both problems are tackled in the context of deep neural networks, a particularly popular and performant type of machine learning algorithm. The project develops metrics that characterize the degree to which a neural-network-based classifier can be evaded through practically realizable, inconspicuous attacks. It also develops architectures for neural networks that make them robust to adversarial examples, i.e., inputs minimally perturbed to cause misclassification.
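To make the notion of evasion concrete, the following is a minimal, hypothetical sketch of a gradient-based evasion attack (in the style of the fast gradient sign method) against a toy linear classifier standing in for a neural network; the weights and input are made up for illustration, and the same idea carries over to deep models via backpropagated gradients:

```python
import numpy as np

# Toy stand-in for a neural network: f(x) = sign(w . x).
# Weights and input below are illustrative, not from the project.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.4, 0.1, 0.2])   # benign input

def predict(x):
    """Binary decision of the toy classifier."""
    return 1 if np.dot(w, x) > 0 else -1

# For a linear model, the gradient of the score w.r.t. the input is w.
# An evasion attack nudges the input against that gradient, with the
# perturbation bounded by eps so the change stays inconspicuous.
eps = 0.3
x_adv = x - eps * np.sign(w)    # push the score toward the other class

print(predict(x))      # original prediction: 1
print(predict(x_adv))  # adversarial prediction flips to: -1
```

The `eps` bound is what a perceptibility metric would constrain in practice: smaller budgets mean less conspicuous perturbations but a harder attack.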

PUBLICATIONS

Our previous work led to the development of a general framework for creating evasion attacks against ML classifiers. A key feature of the framework is that it supports creating attacks subject to multiple simultaneous constraints, including constraints that cannot be formally specified. These results appeared in ACM TOPS:

  • Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter. A general framework for adversarial examples with objectives. ACM Transactions on Privacy and Security, 22(3), June 2019.
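As a rough illustration of attacking under multiple objectives at once, the sketch below optimizes an adversarial input that both flips a toy classifier's decision and stays close to the original input (a crude, differentiable proxy for inconspicuousness). All names and numbers are hypothetical; the published framework handles richer objectives, including informally specified ones modeled with generative networks, rather than this simple penalty:

```python
import numpy as np

# Toy linear classifier: score > 0 is class A, score < 0 is class B.
# Weights and the starting input are illustrative only.
w = np.array([1.5, -1.0, 0.8])
x0 = np.array([0.5, 0.2, 0.3])   # benign input, class A

def score(x):
    return float(np.dot(w, x))

def attack(x0, steps=200, lr=0.05, lam=0.5):
    """Gradient descent on a combined objective:
    (evasion term) + lam * (closeness-to-original penalty)."""
    x = x0.copy()
    for _ in range(steps):
        grad_evasion = w                 # gradient that lowers the score
        grad_penalty = 2 * (x - x0)      # gradient of ||x - x0||^2
        x = x - lr * (grad_evasion + lam * grad_penalty)
    return x

x_adv = attack(x0)
print(score(x0) > 0)    # True: benign input is class A
print(score(x_adv) < 0) # True: adversarial input crosses to class B
```

Trading off `lam` against the evasion term mirrors how the framework balances attack success against additional constraints on the example.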

PUBLIC ACCOMPLISHMENT HIGHLIGHTS

N/A this quarter

COMMUNITY ENGAGEMENTS (If applicable)

Bauer presented work supported by this award at the Summer School on Real-World Crypto and Privacy in Sibenik, Croatia, attended by over 100 PhD students.

EDUCATIONAL ADVANCES (If applicable)

N/A this quarter