Operationalizing Contextual Integrity - July 2022

PI(s), Co-PI(s), Researchers:

  • Serge Egelman (ICSI)
  • Primal Wijesekera (ICSI)
  • Julia Bernd (ICSI)
  • Helen Nissenbaum (Cornell Tech)

HARD PROBLEM(S) ADDRESSED
Human Behavior, Metrics, Policy-Governed Secure Collaboration, and Scalability and Composability.

PUBLICATIONS

  • Accepted:
    Nathan Malkin, David Wagner, and Serge Egelman. Runtime Permissions for Privacy in Proactive Intelligent Assistants. In Proceedings of the 18th Symposium on Usable Privacy and Security (SOUPS '22). USENIX Assoc., Berkeley, CA, USA. 2022.
  • Accepted:
    Nathan Malkin, David Wagner, and Serge Egelman. 2022. Can Humans Detect Malicious Always-Listening Assistants? A Framework for Crowdsourcing Test Drives. Proc. ACM Hum.-Comput. Interact. 6, CSCW2, Article 500 (November 2022), 44 pages.
  • Accepted:
    Julia Bernd, Ruba Abu-Salma, Junghyun Choy, and Alisa Frik. Balancing Power Dynamics in Smart Homes: Nannies' Perspectives on How Cameras Reflect and Affect Relationships. In Proceedings of the 18th Symposium on Usable Privacy and Security (SOUPS '22). USENIX Assoc., Berkeley, CA, USA. 2022.
  • Presented:
    Qasim Lone, Alisa Frik, Matthew Luckie, M. Korczynski, Michel van Eeten, and Carlos Ganan. Deployment of Source Address Validation by Network Operators: A Randomized Control Trial. In Proceedings of the IEEE Symposium on Security and Privacy (Oakland '22), 2022.

KEY HIGHLIGHTS

  • We have been collecting bid data from ~30 ad networks to analyze bidding behavior under different contexts and to examine whether privacy controls are functioning as expected. Our current experiments focus on whether pricing changes as a function of "opt-out" flags (which would normally be sent by apps). We plan to conduct several related experiments over the summer and into the fall.
  • Much of the reporting period was spent revising our accepted papers for publication and preparing the associated conference presentations.

  • Both SOUPS papers were accepted! Citations above. Abstracts:

    • As technology advances, intelligent voice assistants are likely to gain proactive features: offering suggestions without users directly invoking them. Such behavior will exacerbate privacy concerns, since proactive operation requires continuous monitoring of users' conversations.
      To mitigate this problem, our study proposes and evaluates one potential privacy control, in which the assistant requests a user's permission for the information it wishes to use immediately after hearing it.

      To find out how people would react to runtime permission requests, we recruited 23 pairs of participants to hold conversations while receiving ambient suggestions from a proactive assistant, which we simulated in real time using the Wizard of Oz technique. The interactive sessions featured different modes and designs of runtime permission requests and were followed by in-depth interviews about people's preferences and concerns. Most participants were excited about the devices despite their continuous listening, but wanted control over the assistant's actions and their own data. They generally prioritized an interruption-free experience above more fine-grained control over what the device would hear.

    • Smart home cameras raise privacy concerns in part because they frequently collect data not only about the primary users who deployed them but also other parties--who may be targets of intentional surveillance or incidental bystanders. Domestic employees working in smart homes must navigate a complex situation that blends privacy and social norms for homes, workplaces, and caregiving. This paper presents findings from 25 semi-structured interviews with domestic childcare workers in the U.S. about smart home cameras, focusing on how privacy considerations interact with the dynamics of their employer-employee relationships. We show how participants' views on camera data collection, and their desire and ability to set conditions on data use and sharing, were affected by power differentials and norms about who should control information flows in a given context. Participants' attitudes about employers' cameras often hinged on how employers used the data; whether participants viewed camera use as likely to reinforce negative tendencies in the employer-employee relationship; and how camera use and disclosure might reflect existing relationship tendencies. We also suggest technical and social interventions to mitigate the adverse effects of power imbalances on domestic employees' privacy and individual agency.

  • Our CSCW paper was also accepted! Abstract:

    • Intelligent voice assistants are growing in popularity and functionality. Continuous listening is one feature on the horizon. With this capability, malicious actors could train assistants to listen to audio outside their purview, harming users' privacy and security. How can this misbehavior be detected? In many cases, identification may rely on human abilities. But how good are humans at this task? To investigate, we developed a Wizard of Oz interface that allowed users to perform real-time "Test Drives" of three different always-listening services. We then conducted a study with 200 participants, seeing whether they could detect one of four types of malicious apps. We studied the behavior of individuals, as well as groups working collaboratively, also investigating the effects of task framing on performance. Our paper reports on people's effectiveness and their experiences with this novel transparency mechanism.
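
As an illustration of the opt-out pricing experiment described in the highlights above, a minimal sketch of the comparison might group observed bids by opt-out flag and contrast average prices. The record format, field names, and sample values here are hypothetical, chosen only to show the shape of the analysis; the project's actual data pipeline and schema are not specified in this report.

```python
import statistics

def compare_bid_prices(bids):
    """Compare mean bid prices with and without the opt-out flag set.

    `bids` is a list of (opt_out, price) tuples -- a hypothetical,
    simplified record format, not the project's actual schema.
    Returns the per-group means and their difference; a large positive
    difference would suggest opted-out traffic is priced lower.
    """
    opted_out = [price for flag, price in bids if flag]
    tracked = [price for flag, price in bids if not flag]
    return {
        "opt_out_mean": statistics.mean(opted_out),
        "tracked_mean": statistics.mean(tracked),
        "difference": statistics.mean(tracked) - statistics.mean(opted_out),
    }

# Illustrative sample only (prices in arbitrary units).
sample = [(True, 0.8), (True, 1.0), (False, 1.5), (False, 1.7)]
result = compare_bid_prices(sample)
```

A real analysis would of course use the full bid logs and a statistical test (e.g., comparing the two price distributions) rather than a simple difference of means, but the grouping-by-flag structure is the same.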

COMMUNITY ENGAGEMENTS

  • Alisa Frik's paper was presented at the IEEE Symposium on Security and Privacy (Oakland '22); citation above.

EDUCATIONAL ADVANCES

  • Several undergraduate and graduate students assisted with this research.