Operationalizing Contextual Integrity - January 2022
PI(s), Co-PI(s), Researchers:
- Serge Egelman (ICSI)
- Primal Wijesekera (ICSI)
- Nathan Malkin (UCB)
- Julia Bernd (ICSI)
- Helen Nissenbaum (Cornell Tech)
HARD PROBLEM(S) ADDRESSED
Human Behavior, Metrics, Policy-Governed Secure Collaboration, and Scalability and Composability.
PUBLICATIONS
- See below.
KEY HIGHLIGHTS
- We have begun designing and deploying experiments to examine how user privacy controls affect users' privacy in practice within the online advertising ecosystem. That is, do "opt out" mechanisms do what users expect? Are users being tracked in spite of privacy settings? To examine this, we are performing black-box testing on advertising API endpoints: we are building infrastructure to conduct real-time ad auctions, treating the inputs (e.g., users' privacy settings, the presence of personal information, etc.) as independent variables and the price and content of the resulting ad as dependent variables. For example, this allows us to send two otherwise-identical ad requests that differ only in whether they carry a "do not track" signal: if that signal (and the underlying privacy controls) is being honored, we would expect bid prices to be significantly higher for the requests without "do not track", since those impressions can be targeted using tracking data. (A sketch of this paired-request design appears after this item.)
At this stage, we have completed building the infrastructure: we have built scripts that mimic popular ad network APIs, so that we can spoof requests as originating from SDKs embedded within mobile apps and on websites, without actually having to generate website or mobile app interactions. We have also published a real Android app to the Play Store, so that we can sign up for ad network APIs that vet their partners. We expect to have data on ad networks' privacy behaviors by the end of the semester.
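To make the paired-request design concrete, the following is a minimal sketch in Python, assuming an OpenRTB 2.x-style auction endpoint. BIDDER_URL, the app bundle, and the device values are hypothetical placeholders; real ad networks wrap the protocol in their own APIs and authentication, which our infrastructure mimics per network.

```python
import copy
import uuid

import requests

# Hypothetical auction endpoint; in practice each ad network exposes its
# own (often OpenRTB-based) API, which the spoofed-SDK scripts target.
BIDDER_URL = "https://bidder.example.com/openrtb2/auction"


def base_bid_request():
    """A minimal OpenRTB 2.x bid request, spoofed to look as though it
    originated from an ad SDK embedded in a mobile app."""
    return {
        "id": str(uuid.uuid4()),
        "imp": [{"id": "1", "banner": {"w": 320, "h": 50}, "bidfloor": 0.01}],
        "app": {"bundle": "com.example.testapp"},  # placeholder bundle ID
        "device": {
            "ua": "Mozilla/5.0 (Linux; Android 11)",  # placeholder values
            "ip": "203.0.113.7",
            "dnt": 0,  # independent variable: 0 = tracking permitted
        },
    }


def winning_price(request):
    """Run one auction and return the highest bid price, if any."""
    response = requests.post(BIDDER_URL, json=request, timeout=10).json()
    prices = [bid["price"]
              for seatbid in response.get("seatbid", [])
              for bid in seatbid.get("bid", [])]
    return max(prices, default=None)


# Two requests identical except for the "do not track" signal.
req_track = base_bid_request()
req_dnt = copy.deepcopy(req_track)
req_dnt["id"] = str(uuid.uuid4())  # fresh auction ID
req_dnt["device"]["dnt"] = 1       # treatment: do-not-track requested

print("price without DNT:", winning_price(req_track))
print("price with DNT:   ", winning_price(req_dnt))
```

Because individual auctions are noisy, the comparison is meant to be made over many such pairs, so that the resulting bid-price distributions can be compared statistically rather than relying on any single pair.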
- We submitted a paper to SOUPS in which we evaluate a new type of privacy control for passive-listening in-home devices. Abstract:
As technology advances, intelligent voice assistants are likely to gain proactive features: offering suggestions without users directly invoking them. Such behavior will exacerbate privacy concerns, since proactive operation requires continuous monitoring of users' conversations.
To mitigate this problem, our study proposes and evaluates one potential privacy control, in which the assistant requests a user's permission for the information it wishes to use immediately after hearing it.
To find out how people would react to runtime permission requests, we recruited 23 pairs of participants to hold conversations while receiving ambient suggestions from a proactive assistant, which we simulated in real time using the Wizard of Oz technique. The interactive sessions featured different modes and designs of runtime permission requests and were followed by in-depth interviews about people's preferences and concerns. Most participants were excited about the devices despite their continuous listening, but wanted control over the assistant's actions and their own data. They generally prioritized an interruption-free experience above more fine-grained control over what the device would hear.
- We submitted a separate, but related, paper to CSCW. Abstract:
Intelligent voice assistants are growing in popularity and functionality. Continuous listening is one feature on the horizon. With this capability, malicious actors could train assistants to listen to audio outside their purview, harming users' privacy and security. How can this misbehavior be detected? In many cases, identification may rely on human abilities. But how good are humans at this task? To investigate, we developed a Wizard of Oz interface that allowed users to perform real-time "Test Drives" of three different always-listening services. We then conducted a study with 200 participants, seeing whether they could detect one of four types of malicious apps. We studied the behavior of individuals, as well as groups working collaboratively, also investigating the effects of task framing on performance. Our paper reports on people's effectiveness and their experiences with this novel transparency mechanism.
COMMUNITY ENGAGEMENTS
- Nothing to report this period.
EDUCATIONAL ADVANCES:
- This project formed the basis of Nathan Malkin's Ph.D. thesis, which he defended in August 2021.