Operationalizing Contextual Integrity - October 2018
PI(s), Co-PI(s), Researchers: Serge Egelman, Primal Wijesekera, Irwin Reyes, Julia Bernd, and Maritza Johnson (ICSI); Helen Nissenbaum (Cornell Tech)
HARD PROBLEM(S) ADDRESSED
Human Behavior: We are designing human subjects studies to examine how privacy perceptions change as a function of contextual privacy norms. Our goal is to design and develop future privacy controls whose high usability stems from design principles informed by empirical research.
Metrics: We seek to build models of human behavior by studying it in both the laboratory and the field. These models will inform the design of future privacy controls.
Policy-Governed Secure Collaboration: One goal of this project is to examine how policies surrounding the acceptable use of personal data can be adapted to support the theory of contextual integrity.
Scalability and Composability: Ultimately, our goal is to be able to design systems that function on contextual integrity's principles, by automatically inferring privacy norms in one context and applying them to future contexts.
PUBLICATIONS
Irwin Reyes, Primal Wijesekera, Joel Reardon, Amit Elazari Bar On, Abbas Razaghpanah, Narseo Vallina-Rodriguez, and Serge Egelman. "Won't Somebody Think of the Children?" Examining COPPA Compliance at Scale. Proceedings on Privacy Enhancing Technologies (PoPETs), 2018(3):63-83.
Primal Wijesekera, Joel Reardon, Irwin Reyes, Lynn Tsai, Jung-Wei Chen, Nathan Good, David Wagner, Konstantin Beznosov, and Serge Egelman. "Contextual Permission Models for Better Privacy Protection." Symposium on Applications of Contextual Integrity, 2018.
Julia Bernd, Serge Egelman, Maritza Johnson, Nathan Malkin, Franziska Roesner, Madiha Tabassum, and Primal Wijesekera. "Studying User Expectations about Data Collection and Use by In-Home Smart Devices." Symposium on Applications of Contextual Integrity, 2018.
Nathan Malkin, Primal Wijesekera, Serge Egelman, and David Wagner. "Use Case: Passively Listening Personal Assistants." Symposium on Applications of Contextual Integrity, 2018.
KEY HIGHLIGHTS
The main efforts this quarter have been on designing new studies to explore contextual integrity with regard to in-home smart devices. In one study, we're examining contextual norms around in-home audio monitoring, a practice likely to proliferate. As a first cut at the problem, we're running a study in which users of either the Google Home or Amazon Echo answer questions about audio previously recorded by their devices. Both manufacturers make these recordings accessible to device owners through a web portal, so our study uses a browser extension to present randomly selected clips to participants and then ask them about the circumstances surrounding each recording. We're interested in whether participants were aware that the recordings were made, how sensitive the content is, and their preferences for various data retention and sharing policies.
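As an illustrative sketch of the extension's clip-sampling logic (written in Python rather than the extension's actual JavaScript; the function and variable names here are hypothetical, not our implementation):

    import random

    def sample_clips(recording_urls, n_questions, seed=None):
        """Select up to n_questions recordings uniformly at random,
        without replacement, from a participant's recording history."""
        rng = random.Random(seed)  # seedable so assignments are reproducible
        return rng.sample(recording_urls, min(n_questions, len(recording_urls)))

    # Each sampled clip is played back in the browser, and the participant
    # answers the same battery of questions about it:
    QUESTIONS = [
        "Were you aware that this recording was made?",
        "How sensitive is the content of this recording?",
        "How long should the manufacturer retain this recording?",
        "With whom, if anyone, may this recording be shared?",
    ]

Sampling without replacement ensures each participant rates distinct recordings rather than repeatedly rating the same clip.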
In another set of studies, we're examining existing audio corpora and using crowdworkers to identify and label sensitive conversations, which we can then use to train a classifier. The goal is to design devices that can predict when they should not be recording or sharing data.
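To make that pipeline concrete, here is a minimal baseline sketch of the training step, assuming a corpus of transcripts with binary sensitivity labels aggregated from crowdworker judgments (the names and feature choices below are illustrative, not our final design):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import Pipeline

    def train_sensitivity_baseline(transcripts, labels):
        """Fit a bag-of-words baseline that flags sensitive conversations.
        transcripts: list of strings; labels: 1 = sensitive, 0 = not."""
        pipeline = Pipeline([
            ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=2)),
            ("clf", LogisticRegression(max_iter=1000)),
        ])
        # Cross-validated F1 gives a first read on whether lexical features
        # alone can separate sensitive from non-sensitive conversations.
        f1 = cross_val_score(pipeline, transcripts, labels,
                             cv=5, scoring="f1").mean()
        pipeline.fit(transcripts, labels)
        return pipeline, f1

A device could then decline to record or share audio whenever the classifier's predicted probability of sensitivity exceeds a chosen threshold.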
Finally, in the mobile space, we're looking at the disclosure of data-sharing practices in Android app privacy policies. Since the GDPR and the forthcoming California Consumer Privacy Act (CCPA) require disclosing data recipients (or categories of data recipients), we want to examine compliance and whether we can detect violations. Using our existing testbed and data, we know which third parties receive data (ground truth); the question is whether these practices are adequately disclosed. To examine this, we've designed a crowdsourcing task to label policies at scale. Using a test corpus of 100 policies, we've found very high inter-rater reliability, so this method appears promising. Once we have labeled policies, we also plan to use the labels to train a classifier, to determine whether we can automatically extract named entities from policies and compare them with our observed data flows.
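As a sketch of that comparison, assuming spaCy's off-the-shelf English NER model and deliberately simplified matching (not our actual pipeline):

    import spacy

    nlp = spacy.load("en_core_web_sm")  # off-the-shelf English NER model

    def disclosed_organizations(policy_text):
        """Extract organization names mentioned in a privacy policy."""
        doc = nlp(policy_text)
        return {ent.text.lower() for ent in doc.ents if ent.label_ == "ORG"}

    def undisclosed_recipients(policy_text, observed_recipients):
        """Return observed third-party data recipients (from our testbed's
        network traces) that the policy never names: candidate violations."""
        disclosed = disclosed_organizations(policy_text)
        return {r for r in observed_recipients if r.lower() not in disclosed}

Exact string matching as above would miss many legitimate disclosures (e.g., a policy naming "Google" while traffic goes to crashlytics.com), so entity resolution between observed domains and the company names policies actually use is where most of the work lies.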
COMMUNITY ENGAGEMENTS
We presented three papers at the Symposium on Applications of Contextual Integrity this quarter. All three generated substantial discussion and led to several potential collaborations.
We also reported several security vulnerabilities to Google based on our mobile app analysis findings, and Google is awarding us a bounty for one of them. That vulnerability is actively being exploited by multiple ad SDKs; we have reported this to the FTC and expect to follow up with them.
Finally, our PoPETs paper has generated interest from regulators. We reported several apps that appeared to be violating COPPA to Google, which initially ignored our report. As a result, Google is now being sued by a state attorney general (alongside the app developers), and it has backpedaled and now claims to be taking action based on our reports. So far, this has resulted in the removal of hundreds of child-directed apps from the Play Store.
EDUCATIONAL ADVANCES
None this quarter that pertain to this specific project.