Machine learning algorithms are increasingly part of everyday life: they help power the ads we see while browsing the web, self-driving aids in modern cars, and even weather prediction and critical infrastructure. We rely on these algorithms in part because they perform better than the alternatives and can be easy to customize to new applications. Many machine learning algorithms, however, have a significant weakness: it is difficult to understand how and why they compute the answers they provide. This opaqueness means that the answers we get from a machine learning algorithm could be subtly biased or even completely wrong, and yet we might not realize it. This project's goal is twofold: to make machine learning algorithms easier to understand, and to leverage the techniques attackers use to trick machine learning algorithms into making mistakes in order to build computer systems that are more resistant to attack. In addition to making fundamental contributions to how machine learning algorithms are designed and used, the project includes outreach efforts that will entice students to gain hands-on experience with machine learning tools.
This project focuses on deep neural networks (DNNs). A groundswell of research within the past five years has demonstrated that these models can be evaded by inputs crafted specifically to fool them -- so-called "adversarial examples." These attacks exploit DNNs' opacity: while DNNs can perform remarkably well on some classification tasks, they often defy simple explanations of how they do so, and indeed may rely on features that humans would find surprising. This project uses DNNs and the attacks against them to gain insights into how to build more resilient computer systems. Specifically, the project will use DNNs to model adversaries trying to attack computer systems and then "attack" these DNNs to learn how to improve those systems' resilience. This modeling will be done using Generative Adversarial Nets (GANs), in which "generator" and "discriminator" models compete. Central to this vision are the abilities to evade DNNs under constraints and to extract explanations from them about how they perform classification. Consequently, this project will make fundamental advances both in developing better methods to deceive DNNs and in improving this important machine-learning tool.
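As a concrete illustration of the adversarial-example idea described above, the sketch below shows the well-known fast gradient sign method (FGSM), one standard way to perturb an input so that a classifier is more likely to misclassify it. This is a minimal sketch written against PyTorch; the model, input, label, and perturbation budget are placeholders, and it is not a description of this project's specific attack methods or constraints.

```python
# Minimal FGSM sketch (illustrative only): nudge an input in the direction
# that increases the classifier's loss, bounded by a small budget epsilon.
import torch
import torch.nn as nn

def fgsm_example(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Return a perturbed copy of x that the model is more likely to misclassify."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Take one signed-gradient step that raises the loss, then keep pixels in [0, 1].
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

The same gradient information that enables this kind of evasion also indicates which input features the model relies on, which is one reason attacks can double as a tool for extracting explanations of a DNN's behavior.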
Lujo Bauer is an Associate Professor in the Electrical and Computer Engineering Department and in the Institute for Software Research at Carnegie Mellon University. He received his B.S. in Computer Science from Yale University in 1997 and his Ph.D., also in Computer Science, from Princeton University in 2003.
Dr. Bauer's research interests span many areas of computer security and privacy, and include building usable access-control systems with sound theoretical underpinnings, developing languages and systems for run-time enforcement of security policies on programs, and generally narrowing the gap between a formal model and a practical, usable system. His recent work focuses on developing tools and guidance to help users stay safer online and on examining how advances in machine learning can lead to a more secure future.
Dr. Bauer served as the program chair for the flagship computer security conferences of the IEEE (S&P 2015) and the Internet Society (NDSS 2014) and is an associate editor of ACM Transactions on Information and System Security.