Surveillance and tracking on the Internet are growing more pervasive and threaten privacy and freedom of expression. The Tor anonymity system protects the privacy of millions of users, including ordinary citizens, journalists, whistle-blowers, military intelligence, police, businesses, and people living under censorship and surveillance. Unfortunately, Tor is vulnerable to website fingerprinting (WF) attacks, in which an eavesdropper uses a machine learning (ML) classifier to identify which website a user is visiting from the user's traffic patterns. The research team's state-of-the-art WF attack, based on a deep learning classifier, reaches 98% accuracy, a result that is deeply concerning for Tor and its users. The goal of this project is to explore the new landscape of WF attacks and defenses in light of the team's findings with deep learning. A key aspect of the work is to build upon recent advances in fooling deep learning classifiers and apply them to the context of anonymity systems. Reflecting this focus on adversarial machine learning, the project will create a new course and an accessible summer camp module on the topic, as well as launch a podcast on cybersecurity research featuring interviews with top researchers in the fields of adversarial machine learning and anonymity.
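To make the attack setting concrete, the sketch below shows a minimal deep-learning WF classifier of the kind described above. It assumes, purely for illustration, that each traffic trace is a fixed-length sequence of Tor cell directions (+1 outgoing, -1 incoming, 0 padding); the layer sizes, sequence length of 5000, and 95 monitored sites are assumptions for the sketch, not the team's published architecture.

```python
# Minimal sketch of a CNN-based website-fingerprinting classifier.
# Hyperparameters here are illustrative assumptions, not the published model.
import torch
import torch.nn as nn

class WFClassifier(nn.Module):
    def __init__(self, num_sites: int = 95, seq_len: int = 5000):
        super().__init__()
        # Input: traces as length-5000 sequences of Tor cell directions
        # (+1 outgoing, -1 incoming, 0 padding).
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=8, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=8, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        # Infer the flattened feature size with a dummy forward pass.
        with torch.no_grad():
            n_flat = self.features(torch.zeros(1, 1, seq_len)).numel()
        self.classifier = nn.Linear(n_flat, num_sites)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len) directions -> (batch, num_sites) logits
        h = self.features(x.unsqueeze(1))
        return self.classifier(h.flatten(start_dim=1))

# Usage: classify a batch of 16 random traces among 95 monitored sites.
model = WFClassifier()
logits = model(torch.randint(-1, 2, (16, 5000)).float())
print(logits.shape)  # torch.Size([16, 95])
```

The key design point this illustrates is that such attacks learn directly from raw direction sequences, with no hand-crafted features, which is what makes them both highly accurate and a natural target for adversarial-example-based defenses.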
The research has three thrusts. First, the team is exploring the impact that these attacks can have on Tor users by addressing how the attacks generalize to different network conditions and Tor versions, how they can be better adapted to realistic settings, and how they are affected by real-world user behavior in Tor. Second, since recent work has shown that it is possible to reliably fool deep learning classifiers, the team is studying how to adapt these techniques into a robust and efficient defense. Prior work has primarily targeted image classification, whereas network traffic is more challenging to manipulate: a defense can inject dummy traffic or delay packets, but it cannot remove the packets that a page load genuinely requires. The team is therefore designing new methods that account for this difference, as sketched below. In the third thrust, recognizing that researchers are actively seeking robust classifiers that are harder to fool, the team aims to understand new ways to build robust classifiers and to explore their properties. While this aspect of the project could yield stronger WF attacks against Tor, robust classifiers would benefit the myriad applications of deep learning, such as self-driving cars, stylometry, malware detection, processing drone and satellite imagery, and more.
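As a hedged illustration of the second thrust's starting point, the sketch below applies a single FGSM-style gradient step (the canonical fooling technique from the image domain) to a trace, with a toy constraint that only padding positions may change. That constraint, along with the function name, is an illustrative assumption standing in for traffic realizability, not the project's actual defense.

```python
# FGSM-style fooling of the WF classifier sketched above. The mask is a toy
# stand-in for traffic realizability: a defender may inject dummy cells
# (flip padding zeros to +/-1) but cannot remove or reorder real cells.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, trace, true_label):
    """One gradient-sign step that raises the classifier's loss."""
    x = trace.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), true_label)
    loss.backward()
    mask = (trace == 0).float()           # only padding positions may change
    adv = x + mask * x.grad.sign()        # step to *increase* the loss
    return adv.clamp(-1, 1).round().detach()  # snap to valid directions

# Usage: perturb one trace against the classifier from the sketch above.
model = WFClassifier()
trace = torch.randint(-1, 2, (1, 5000)).float()
adv = fgsm_perturb(model, trace, torch.tensor([0]))
```

In the traffic setting, such a perturbation would still have to be realized by an actual padding scheme at the client or a Tor relay; closing that gap between image-domain fooling and deployable traffic manipulation is precisely what the second thrust targets.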