SaTC: CORE: Medium: Collaborative: Contextual Integrity: From Theory to Practice

Project Details

Lead PI

Performance Period

Sep 01, 2018 - Aug 31, 2022

Institution(s)

University of California-Berkeley

Award Number


Current user-facing computer systems apply a "notice and consent" approach to managing user privacy: the user is presented with a privacy notice and then must consent to its terms. Decades of prior research show that this approach is unmanageable: policies are vague, ambiguous, and often include legal terms that make them very difficult to understand, if they are even read at all. These problems are magnified across Internet of Things (IoT) devices, which may not include displays to present privacy information, and may become so ubiquitous in the environment that users cannot possibly determine when their data is actually being captured. This project aims to solve these problems by designing new privacy management systems that automatically infer users' context-specific privacy expectations and then use them to manage the data-capture and data-sharing behaviors of mobile and IoT devices in users' environments. The goals of this research are to better understand privacy expectations, design privacy controls that require minimal user intervention, and demonstrate how emergent technologies can be designed to empower users to best manage their privacy.

The theory of "Privacy as Contextual Integrity" (CI) postulates that privacy expectations are based on contextual norms, and that privacy violations occur when data flows in ways that defy these norms. The framework can be applied by modeling data flows in terms of the data type, sender, recipient, and the specific context (i.e., the purpose for which data is being shared). While this model makes intuitive sense, several open research questions have prevented it from being applied in computer systems. Specifically, the project investigates how privacy expectations change across varying contexts through the use of surveys, interviews, and behavioral studies, and designs systems to automatically infer contextual information so that determining whether a data flow is likely to defy user expectations can be automated. The investigators develop a prototype of the novel privacy controls and validate their usability and privacy-preserving properties through iterative laboratory and field experiments.
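To make the CI model concrete, the flow parameters named above can be sketched as a simple tuple that is checked against a set of contextual norms. This is only an illustrative sketch, not the project's system: the `Flow` type, the example norms, and the parameter values are all hypothetical, and a real implementation would infer the context automatically rather than receive it as a field.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    """A data flow described by the CI parameters named in the text."""
    data_type: str   # what information is flowing
    sender: str      # who/what transmits it
    recipient: str   # who/what receives it
    context: str     # the purpose for which the data is shared

# Hypothetical contextual norms: flows users would deem acceptable.
NORMS = {
    Flow("heart_rate", "fitness_tracker", "physician", "health_monitoring"),
    Flow("location", "smartphone", "navigation_app", "turn_by_turn_directions"),
}

def violates_contextual_integrity(flow: Flow) -> bool:
    """Under CI, a flow is a violation when it defies the applicable norms."""
    return flow not in NORMS

# The same data type can be fine in one context and a violation in another:
ok = Flow("heart_rate", "fitness_tracker", "physician", "health_monitoring")
bad = Flow("heart_rate", "fitness_tracker", "ad_network", "behavioral_advertising")
print(violates_contextual_integrity(ok))   # False
print(violates_contextual_integrity(bad))  # True
```

The sketch highlights the key point of the paragraph: a violation is not a property of the data type alone but of the whole flow, which is why changing only the recipient and purpose flips the result.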

Serge Egelman is Research Director of the Usable Security & Privacy Group at the International Computer Science Institute (ICSI) and also holds an appointment in the Department of Electrical Engineering and Computer Sciences (EECS) at the University of California, Berkeley. He leads the Berkeley Laboratory for Usable and Experimental Security (BLUES), which is the amalgamation of his ICSI and UCB research groups. Serge's research focuses on the intersection of privacy, computer security, and human-computer interaction, with the specific aim of better understanding how people make decisions surrounding their privacy and security, and then creating data-driven improvements to systems and interfaces. This has included human subjects research on social networking privacy, access controls, authentication mechanisms, web browser security warnings, and privacy-enhancing technologies. His work has received multiple best paper awards, including seven ACM CHI Honorable Mentions, the 2012 Symposium on Usable Privacy and Security (SOUPS) Distinguished Paper Award for his work on smartphone application permissions, as well as the 2017 SOUPS Impact Award, and the 2012 Information Systems Research Best Published Paper Award for his work on consumers' willingness to pay for online privacy. He received his PhD from Carnegie Mellon University and prior to that was an undergraduate at the University of Virginia. He has also performed research at NIST, Brown University, Microsoft Research, and Xerox PARC.