Operationalizing Contextual Integrity - January 2019

PI(s), Co-PI(s), Researchers: Serge Egelman, Primal Wijesekera, Irwin Reyes, Julia Bernd, and Maritza Johnson (ICSI); Helen Nissenbaum (Cornell Tech)

HARD PROBLEM(S) ADDRESSED
Human Behavior: We are designing human subjects studies to examine how privacy perceptions change as a function of contextual privacy norms. Our goal is to design and develop future privacy controls that have high usability because their design principles are informed by empirical research.

Metrics: We seek to build models of human behavior by studying it in both the laboratory and the field. These models will inform the design of future privacy controls.

Policy-Governed Secure Collaboration: One goal of this project is to examine how policies surrounding the acceptable use of personal data can be adapted to support the theory of contextual integrity.

Scalability and Composability: Ultimately, our goal is to design systems that operate on contextual integrity's principles by automatically inferring privacy norms in one context and applying them to future contexts.

PUBLICATIONS
Irwin Reyes, Primal Wijesekera, Joel Reardon, Amit Elazari Bar On, Abbas Razaghpanah, Narseo Vallina-Rodriguez, and Serge Egelman. "Won't Somebody Think of the Children?" Examining COPPA Compliance at Scale. Proceedings on Privacy Enhancing Technologies (PoPETs), 2018(3):63-83.

Primal Wijesekera, Joel Reardon, Irwin Reyes, Lynn Tsai, Jung-Wei Chen, Nathan Good, David Wagner, Konstantin Beznosov, and Serge Egelman. "Contextual Permission Models for Better Privacy Protection." Symposium on Applications of Contextual Integrity, 2018.

Julia Bernd, Serge Egelman, Maritza Johnson, Nathan Malkin, Franziska Roesner, Madiha Tabassum, and Primal Wijesekera. "Studying User Expectations about Data Collection and Use by In-Home Smart Devices." Symposium on Applications of Contextual Integrity, 2018.

Nathan Malkin, Primal Wijesekera, Serge Egelman, and David Wagner. "Use Case: Passively Listening Personal Assistants." Symposium on Applications of Contextual Integrity, 2018.

KEY HIGHLIGHTS
We recently deployed a study to examine contextual norms around in-home audio monitoring, which is likely to proliferate as new devices appear on the market. We are recruiting users of both the Google Home and the Amazon Echo to answer questions about previously recorded audio from their devices. Both manufacturers make audio recordings accessible to device owners through a web portal, so our study uses a browser extension to randomly present these clips to participants and then ask questions about the circumstances surrounding each recording. We are interested in whether participants were aware that the recordings were made, how sensitive the content is, and what preferences participants have for various data retention and sharing policies.
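To make the study design concrete, the sketch below shows the clip-sampling and response-recording logic in Python. The Recording and SurveyResponse structures and their fields are hypothetical; the actual instrument is a browser extension, so this is an illustration of the protocol, not the extension's code.

    import random
    from dataclasses import dataclass

    @dataclass
    class Recording:
        # Hypothetical metadata for one clip, as exposed by the vendor's portal.
        clip_id: str
        timestamp: str
        audio_url: str

    @dataclass
    class SurveyResponse:
        # The three question areas described above.
        clip_id: str
        was_aware: bool            # Was the participant aware this was recorded?
        sensitivity: int           # e.g., 1 (not sensitive) to 5 (very sensitive)
        retention_preference: str  # e.g., "delete immediately", "keep 30 days"

    def sample_clips(history: list[Recording], n: int) -> list[Recording]:
        # Sample clips uniformly at random (without replacement) so that
        # responses are not biased toward the most recent recordings.
        return random.sample(history, min(n, len(history)))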

In another set of studies, we are examining existing audio corpora and using crowdworkers to identify sensitive conversations, which we can then label and use to train a classifier. The goal is to design devices that can predict when they should not record or share data. We recently deployed this study to several hundred participants and are now reviewing the data.
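As a rough illustration of this pipeline's final step, the sketch below fits a simple text classifier (TF-IDF features with logistic regression, via scikit-learn) to crowdworker-labeled transcripts. The report does not specify the model or features, so treat this as one plausible baseline rather than the project's actual classifier.

    # A minimal sketch, not the project's actual pipeline: given transcribed
    # conversation snippets and majority-vote crowdworker labels
    # (1 = sensitive, 0 = not sensitive), fit a simple text classifier.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline, make_pipeline

    def train_sensitivity_classifier(transcripts: list[str],
                                     labels: list[int]) -> Pipeline:
        X_train, X_test, y_train, y_test = train_test_split(
            transcripts, labels, test_size=0.2, random_state=42, stratify=labels)
        model = make_pipeline(
            TfidfVectorizer(ngram_range=(1, 2), min_df=2),  # word/bigram features
            LogisticRegression(max_iter=1000, class_weight="balanced"),
        )
        model.fit(X_train, y_train)
        # Report held-out precision/recall; for deciding when a device should
        # stop recording, recall on the sensitive class matters most.
        print(classification_report(y_test, model.predict(X_test)))
        return model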

COMMUNITY ENGAGEMENTS

We reported several security vulnerabilities to Google based on our mobile app analysis findings, and Google is providing us with a bounty for one of them. This vulnerability is actively being exploited by multiple ad SDKs; we have reported this activity to the FTC and expect to follow up with them. Similarly, we received a bounty from Facebook for reporting widespread misuse of their SDK.

EDUCATIONAL ADVANCES

None this quarter that pertain to this specific project.