Operationalizing Contextual Integrity - July 2021

PI(s), Co-PI(s), Researchers:

  • Serge Egelman (ICSI)
  • Primal Wijesekera (ICSI)
  • Nathan Malkin (UCB)
  • Julia Bernd (ICSI)
  • Helen Nissenbaum (Cornell Tech)

HARD PROBLEM(S) ADDRESSED
Human Behavior, Metrics, Policy-Governed Secure Collaboration, and Scalability and Composability.

PUBLICATIONS

  • Our IMWUT (née UbiComp) paper received a "major revisions" decision and was resubmitted with changes.
  • We are preparing submissions to CHI and PETS for next quarter (as noted below).

KEY HIGHLIGHTS

  • We performed a study that collected people's perceptions of passive listening, their privacy preferences around it, their reactions to different modalities of permission requests, and their suggestions for other privacy controls. Based on our results, we created a set of recommendations for how privacy decisions should be presented to users of these and other future in-home data-capture devices.

    To find out how people react to different kinds of runtime permission requests, we ask participants in this study to hold conversations while receiving ambient suggestions (and plenty of permission requests) from a passive listening assistant, which we simulate in real time using the Wizard of Oz technique. We are examining different permission systems, such as asking every time, asking on first use, and using machine learning (sketched in code after this list). Setting aside the practicality of each approach, our goal in this study is to understand each of these permission designs from the user's perspective. What is the user experience of each permission approach? What are their relative advantages and disadvantages? Which are most likely to be acceptable for day-to-day use, and which engender people's trust? Most of our participants seem excited about passive listening, but want control over the assistant's actions and their own data. They generally seem to prioritize an interruption-free experience over fine-grained control of what the device is allowed to record. We plan to write this up for PETS next quarter.
  • Our study of passive-listening devices used an interactive app store experience that provided a unique means of measuring consumer sentiment in a scenario modeling real life. Using both quantitative and qualitative analysis, we determined people's views on privacy models for always-listening voice assistants, which generally ranged from outright rejection of the voice assistant described in our survey to preferring one model for its increased privacy protections. Only three participants (1.4%) responded that they believed sufficient privacy protections were in place for both models they were assigned, indicating that neither model is good enough by itself for most people to be comfortable with it. The results of this study demonstrate that, as a whole, people are concerned about the privacy protections, or lack thereof, offered by always-listening voice assistants. This holds true even though the models we tested may be too simplistic or incomplete; users may simply feel that some protection is better than none. Our findings show that consumers do seek to make choices to protect their privacy when considering new technologies, as demonstrated by the number of apps they installed and reinforced by participants' explicit reflections on those choices after browsing the store. Prevailing sentiments from our qualitative analysis show concern about malicious third parties gaining access to sensitive data. We plan to write this up for CHI next quarter.
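
As a purely illustrative aid, the sketch below contrasts the three permission designs compared in the Wizard of Oz study. This is our own minimal Python sketch: the class names, the ask_user() prompt, and the 0.9/0.1 confidence thresholds are hypothetical assumptions for illustration, not interfaces or parameters from the study.

    # Hypothetical sketch of the three permission-request policies;
    # not a real assistant API.
    from typing import Callable

    def ask_user(app: str, capability: str) -> bool:
        """Stand-in for a runtime permission prompt shown to the user."""
        answer = input(f"Allow '{app}' to {capability}? [y/n] ")
        return answer.strip().lower().startswith("y")

    class AskEveryTime:
        """Prompt the user on every access attempt."""
        def allow(self, app: str, capability: str) -> bool:
            return ask_user(app, capability)

    class AskOnFirstUse:
        """Prompt once per (app, capability) pair and remember the answer."""
        def __init__(self) -> None:
            self.decisions: dict[tuple[str, str], bool] = {}

        def allow(self, app: str, capability: str) -> bool:
            key = (app, capability)
            if key not in self.decisions:
                self.decisions[key] = ask_user(app, capability)
            return self.decisions[key]

    class LearnedPolicy:
        """Defer to a model of the user's preferences; prompt only when unsure."""
        def __init__(self, predict: Callable[[str, str], float]) -> None:
            self.predict = predict  # hypothetical: returns P(user would allow)

        def allow(self, app: str, capability: str) -> bool:
            p = self.predict(app, capability)
            if p > 0.9:   # confident allow (threshold is an assumption)
                return True
            if p < 0.1:   # confident deny
                return False
            return ask_user(app, capability)  # uncertain: fall back to asking

For example, AskOnFirstUse().allow("recipes", "record audio") prompts the first time and silently reuses the stored decision on later calls, which reflects the interruption-minimizing behavior most of our participants favored.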

COMMUNITY ENGAGEMENTS

  • Nothing to report this period

EDUCATIONAL ADVANCES

  • This project forms the basis of Nathan Malkin's Ph.D. thesis, which will be submitted in August 2021.