Operationalizing Contextual Integrity - April 2021

PI(s), Co-PI(s), Researchers:

  • Serge Egelman (ICSI)
  • Primal Wijesekera (ICSI)
  • Nathan Malkin (UCB)
  • Julia Bernd (ICSI)
  • Helen Nissenbaum (Cornell Tech)

HARD PROBLEM(S) ADDRESSED
Human Behavior, Metrics, Policy-Governed Secure Collaboration, and Scalability and Composability.

PUBLICATIONS

  • We submitted a paper to IMWUT (née UbiComp).

KEY HIGHLIGHTS

  • We are continuing to perform a series of studies examining consumers' acceptance of future "always-listening" devices within the home, using design features of these fictitious (though likely coming soon) devices as variables to test participants' privacy concerns. We designed an interactive voice assistant app store experience to use in concert with a traditional survey, and randomly assigned participants to conditions that differed in how app permissions were presented (e.g., as specific data types collected, using examples of data collected, through user reviews, etc.). Our findings show that participants as a whole installed more apps and indicated a preference for voice assistants with privacy models. At a more detailed level, however, although over half of participants indicated that privacy was a reason for preferring one of the two conditions they were assigned, the majority did not look at the permissions for the apps they installed.
  • We started designing a study of in-home personal assistants, privacy, and the types of features that people would like to use (while balancing those privacy concerns). We decided to apply the Experience Sampling Method (ESM) and periodically survey people about the conversations they had just had within their homes, and their level of comfort with having those conversations disclosed to devices and services. This involved creating a software prototype to randomly survey people on their phones. This project is ongoing.
  • A study related to the above involves showing people snippets of human conversations and asking about their level of comfort with having either humans or machines access those conversations for various purposes. The goal is to identify the types of topics and cues that classifiers could use to automatically apply privacy protections for in-home data capture devices (e.g., future smart assistants). We are working to use this data as a training set: at this stage, human coders are labeling the ground-truth data that will later be used to train the classifier (a brief sketch of this pipeline appears after this list).
  • Based on our prior work on providing privacy controls for always-listening devices that leverage CI theory, we observed that users desire audit mechanisms, so that they can examine what decisions have previously been made about their privacy. Using these audit mechanisms, users could theoretically determine whether or not an app was maliciously collecting data. A natural concern about this transparency approach is its efficacy: would it enable users to catch malicious or misbehaving apps, or would privacy violations remain undetected? We performed a study in which we simulated different types of privacy auditing mechanisms to evaluate their effectiveness.

    We show that providing users with feedback and examples of the types of data apps may collect is an effective method for helping them detect malicious apps that may cause privacy violations. We also show that these techniques are likely applicable to other domains that rely on machine learning and that offer opportunities to decompose larger problems into smaller, self-contained tasks amenable to human verification. Examples include machine translation, detection of toxic comments, household robots, or even self-driving cars. In each of these, it is possible to define an isolated test instance, examine the model's behavior under those circumstances, and subject its choices to human scrutiny to see whether it is behaving in a potentially malicious manner. In fact, this notion of separability may itself be a lesson for the design of artificial intelligence: to make an AI that is understandable, trustworthy, and demonstrably non-malicious, design it in a way that allows its users to take it for a "Test Drive." In particular, a key requirement is that the functionality being tested is stateless, so that it cannot game the system by altering its behavior based on time or usage level. We submitted this paper to IMWUT in February.
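
    As an illustration of the Test Drive idea, below is a minimal sketch in Python. The handler functions, data-field names, and permission list are hypothetical placeholders rather than anything from our prototype; the point is only that a stateless app handler sees one isolated probe utterance at a time and therefore cannot tell an audit from real household speech.

      # Hypothetical sketch of a stateless "Test Drive" audit; all names are illustrative.
      from typing import Callable, List

      # Model an always-listening app as a pure function: one utterance in,
      # the list of data fields the app tries to collect out. Because the
      # handler keeps no state, it cannot behave differently during an audit.
      AppHandler = Callable[[str], List[str]]

      ALLOWED_FIELDS = {"requested_recipe"}  # fields declared in the app's permissions

      def benign_recipe_app(utterance: str) -> List[str]:
          # Collects only what it needs to answer a cooking question.
          return ["requested_recipe"] if "recipe" in utterance else []

      def overreaching_recipe_app(utterance: str) -> List[str]:
          # Quietly collects an extra, undeclared field on every utterance.
          fields = ["requested_recipe"] if "recipe" in utterance else []
          return fields + ["speaker_location"]

      def test_drive(app: AppHandler, probes: List[str]) -> List[str]:
          """Run the app on isolated probe utterances and report any field
          it tries to collect beyond its declared permissions."""
          violations = []
          for probe in probes:
              for field in app(probe):
                  if field not in ALLOWED_FIELDS:
                      violations.append(f"{probe!r} -> undeclared field {field!r}")
          return violations

      probes = ["find me a lasagna recipe", "what time is the game tonight"]
      print("benign app:      ", test_drive(benign_recipe_app, probes))
      print("overreaching app:", test_drive(overreaching_recipe_app, probes))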
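
    Returning to the conversation-snippet study in the third highlight: a minimal sketch of how human-coded labels might feed a classifier is below, using scikit-learn as an assumed toolkit. The snippets, labels, and model choice are made-up illustrations, not our actual coding scheme, features, or training data.

      # Hypothetical sketch of training a classifier on human-coded ground truth.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      # Conversation snippets with labels assigned by human coders
      # (1 = participants were uncomfortable sharing, 0 = comfortable).
      snippets = [
          "I got my blood test results back from the doctor today",
          "can you pass the salt",
          "we still owe three thousand dollars on the credit card",
          "the game starts at seven tonight",
      ]
      coder_labels = [1, 0, 1, 0]

      # Simple bag-of-words classifier trained on the coded snippets.
      model = make_pipeline(TfidfVectorizer(), LogisticRegression())
      model.fit(snippets, coder_labels)

      # A future always-listening device could consult such a model before
      # releasing a newly captured snippet to apps or cloud services.
      print(model.predict(["my cholesterol numbers were pretty bad"]))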

COMMUNITY ENGAGEMENTS

  • Nothing to report this period

EDUCATIONAL ADVANCES

  • This project is forming the basis of Nathan Malkin's Ph.D. thesis.