Securing Safety-Critical Machine Learning Algorithms - April 2021
PI(s), Co-PI(s), Researchers: Lujo Bauer, Matt Fredrikson (CMU), Mike Reiter (UNC)
HARD PROBLEM(S) ADDRESSED
This project addresses the following hard problems: developing security metrics and developing resilient architectures. Both problems are tackled in the context of deep neural networks, which are a particularly popular and performant type of machine learning algorithm. This project develops metrics that characterize the degree to which a neural-network-based classifier can be evaded through practically realizable, inconspicuous attacks. The project also develops architectures for neural networks that would make them robust to adversarial examples.
PUBLICATIONS
Keane Lucas, Mahmood Sharif, Lujo Bauer, Michael K. Reiter, Saurabh Shintre. Malware Makeover: Breaking ML-based Static Analysis by Modifying Executable Bytes. In Proc. AsiaCCS, June 2021. To appear.
PUBLIC ACCOMPLISHMENT HIGHLIGHTS
Our upcoming AsiaCCS paper extends our earlier arXiv preprint with new results. We had previously shown that malware binaries can often be transformed so that they evade correct classification by anti-virus programs (i.e., they are incorrectly classified as benign). Leveraging an expanded experimental infrastructure, we have since shown that such attacks can ultimately succeed even against binaries that initially appear resistant to attack. Specifically, we observed that previous attacks are sufficiently stochastic that, even when they usually fail, a determined adversary who makes enough attempts will eventually succeed with high probability.
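The intuition behind that last observation can be sketched as follows: if each attack attempt succeeds independently with some per-attempt probability p, then the chance that at least one of n attempts succeeds is 1 - (1 - p)^n, which approaches 1 as n grows. The snippet below illustrates this arithmetic; the value p = 0.05 is purely illustrative and is not a figure from the paper.

```python
import math

def cumulative_success_prob(p: float, n: int) -> float:
    """Probability that at least one of n independent attempts succeeds."""
    return 1.0 - (1.0 - p) ** n

def attempts_needed(p: float, target: float = 0.95) -> int:
    """Smallest n such that the cumulative success probability reaches target."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p))

if __name__ == "__main__":
    p = 0.05  # hypothetical per-attempt success rate, for illustration only
    for n in (1, 10, 50, attempts_needed(p)):
        print(f"{n:3d} attempts -> P(at least one success) = "
              f"{cumulative_success_prob(p, n):.3f}")
```

Even with a per-attempt success rate of only 5%, fewer than 60 attempts suffice to push the overall success probability above 95%, which is why attacks that "usually fail" still pose a practical threat to a determined adversary.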
COMMUNITY ENGAGEMENTS (If applicable)
N/A this quarter
EDUCATIONAL ADVANCES (If applicable)
N/A this quarter