Operationalizing Contextual Integrity - October 2019

PI(s), Co-PI(s), Researchers: Serge Egelman, Primal Wijesekera, Alisa Frik, and Julia Bernd (ICSI); Helen Nissenbaum (Cornell Tech)

HARD PROBLEM(S) ADDRESSED
Human Behavior: We are designing human subjects studies to examine how privacy perceptions change as a function of contextual privacy norms. Our goal is to design and develop future privacy controls with high usability, with design principles informed by this empirical research.

Metrics: We seek to build models of human behavior by studying it in both the laboratory and the field. These models will inform the design of future privacy controls.

Policy-Governed Secure Collaboration: One goal of this project is to examine how policies surrounding the acceptable use of personal data can be adapted to support the theory of contextual integrity.

Scalability and Composability: Ultimately, our goal is to design systems that function on contextual integrity's principles by automatically inferring privacy norms from one context and applying them to future contexts.

PUBLICATIONS

Madiha Tabassum, Tomasz Kosinski, Alisa Frik, Nathan Malkin, Primal Wijesekera, Serge Egelman, and Heather Lipford. Investigating Users' Preferences and Expectations for Always-Listening Voice Assistants. In Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), 2019.

Alisa Frik, Leysan Nurgalieva, Julia Bernd, Joyce Lee, Florian Schaub, and Serge Egelman. Privacy and Security Threat Models and Mitigation Strategies of Older Adults. In Proceedings of the 15th Symposium on Usable Privacy and Security (SOUPS '19), 2019, Berkeley, CA, USA.

KEY HIGHLIGHTS

In addition to the aforementioned publications and engagement events, we are currently designing new studies to better understand users' privacy perceptions surrounding in-home voice assistants, and voice capture in general. The goal is to gather data that can be used to predict privacy-sensitive events based on contextual data.

COMMUNITY ENGAGEMENTS

We organized a workshop, open to the public, to present nascent research that makes use of the Contextual Integrity framework. The "Symposium on Applications of Contextual Integrity" was hosted at Berkeley on August 19-20 and featured more than 20 speakers. The website is here: http://privaci.info/ci_symposium/cfp.html

EDUCATIONAL ADVANCES

We hosted a tutorial at the Symposium on Usable Privacy and Security (SOUPS), in which we explained the CI framework and showed other researchers how they can apply it in their own privacy research.