Operationalizing Contextual Integrity - January 2023

PI(s), Co-PI(s), Researchers:

  • Serge Egelman (ICSI)
  • Primal Wijesekera (ICSI)
  • Julia Bernd (ICSI)
  • Helen Nissenbaum (Cornell Tech)

HARD PROBLEM(S) ADDRESSED
Human Behavior, Metrics, Policy-Governed Secure Collaboration, and Scalability and Composability.

PUBLICATIONS

  • Presented:
    Nathan Malkin, David Wagner, Serge Egelman. "Can Humans Detect Malicious Always-Listening Assistants? A Framework for Crowdsourcing Test Drives." Proc. ACM Hum.-Comput. Interact., Vol. 6, No. CSCW2, Article 500. Publication date: November 2022. CSCW 2022.
  • Accepted:
    C. Gilsenan, F. Shakir, N. Alomar, and S. Egelman. "Security and Privacy Failures in Popular 2FA Apps." Proceedings of the 2023 USENIX Security Symposium.
  • Under Submission:
    Nikita Samarin, Shayna Kothari, Zaina Siyed, Oscar Bjorkman, Reena Yuan, Primal Wijesekera, Noura Alomar, Jordan Fischer, Chris Hoofnagle, and Serge Egelman. "Measuring the Compliance of Android App Developers with the California Consumer Privacy Act (CCPA)." Proceedings on Privacy Enhancing Technologies (PETS) 2023.
  • Published:
    N. Malkin. "Contextual Integrity, Explained: A More Usable Privacy Definition." IEEE Security & Privacy, doi: 10.1109/MSEC.2022.3201585. (indirectly supported by this project).

KEY HIGHLIGHTS

Do privacy controls function as expected?
We are continuing to collect data from mobile ad networks (via real-time bidding, RTB) and to analyze bid prices under different contexts, in order to examine whether privacy controls function as expected. Our current experiments focus on whether pricing changes as a function of the "opt-out" flags that apps would normally send. Several related experiments are in progress, and we continue to work toward a publication; one way to structure the price comparison is sketched below.
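A minimal Python sketch of that comparison, for concreteness. It is illustrative rather than our actual pipeline: send_bid_request() is a hypothetical helper that submits an OpenRTB 2.x request to an exchange and returns the winning bid price (CPM), while the opt-out fields themselves (device.lmt and the IAB "US Privacy" string in regs.ext.us_privacy) do come from the OpenRTB 2.x and IAB CCPA specifications.

    import copy
    import statistics

    BASE_REQUEST = {
        "id": "req-001",
        "imp": [{"id": "1", "banner": {"w": 320, "h": 50}}],
        "device": {"lmt": 0},                     # 0 = ad tracking allowed
        "regs": {"ext": {"us_privacy": "1YNN"}},  # third char N: user has not opted out
    }

    def variant(opted_out: bool) -> dict:
        """Return a copy of the base request with the opt-out signals toggled."""
        req = copy.deepcopy(BASE_REQUEST)
        if opted_out:
            req["device"]["lmt"] = 1                   # limit ad tracking
            req["regs"]["ext"]["us_privacy"] = "1YYN"  # third char Y: opted out of sale
        return req

    def compare_prices(send_bid_request, trials: int = 100) -> None:
        """Submit paired requests and report the mean winning CPM per condition."""
        prices = {False: [], True: []}
        for _ in range(trials):
            for opted_out in (False, True):
                price = send_bid_request(variant(opted_out))
                if price is not None:  # the exchange may return no bid
                    prices[opted_out].append(price)
        for opted_out, observed in prices.items():
            label = "opt-out" if opted_out else "baseline"
            if observed:
                print(f"{label}: n={len(observed)} "
                      f"mean CPM={statistics.mean(observed):.4f}")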


Can Humans Detect Malicious Always-Listening Assistants?
As intelligent voice assistants move toward continuous listening, new privacy and security concerns arise. To preserve privacy, apps should hear only what is relevant to them; however, because determining what is relevant to a specific app remains a difficult problem for natural language processing, human judgment is currently the best tool for the task. To find out whether people can detect this type of malicious activity, we developed a Wizard of Oz interface that lets users test-drive three different always-listening services. We then used this interface to conduct a study with 200 participants, examining whether individuals or collaborating groups could detect one of four types of malicious apps. For individuals, detection rates varied widely (from 7.7% to 75%) depending on the type of attack. Considered collectively, however, groups were highly successful. The study showed that our test-drive framework can be used to effectively study user behaviors and concerns, and that it could be a useful addition to voice assistant app stores, where it could reduce privacy concerns surrounding always-listening services.
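As a back-of-the-envelope illustration (not the paper's analysis) of why groups can succeed collectively even when individual detection rates are low: if each of n reviewers independently detects a malicious app with probability p, the chance that at least one of them catches it is 1 - (1 - p)^n.

    def group_detection_rate(p: float, n: int) -> float:
        """Probability that at least one of n independent reviewers detects the app."""
        return 1 - (1 - p) ** n

    # Individual rates spanning the range observed in the study (7.7% to 75%):
    for p in (0.077, 0.25, 0.75):
        for n in (1, 5, 20):
            print(f"p={p:.3f}  n={n:2d}  group rate={group_detection_rate(p, n):.1%}")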
Our paper on this study, "Can Humans Detect Malicious Always-Listening Assistants? A Framework for Crowdsourcing Test Drives," was presented in November at the 25th ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW). CSCW brings together top researchers and practitioners to explore the technical, social, material, and theoretical challenges of designing technology to support collaborative work and life activities.


Third-party 2FA TOTP authenticator apps
The Time-based One-Time Password (TOTP) algorithm is a widely deployed 2FA method, thanks to its relatively low implementation costs and purported security benefits over SMS 2FA. However, users of TOTP 2FA apps must maintain access to the secrets stored within the app or risk being locked out of their accounts. To help users avoid this fate, popular TOTP apps implement a wide range of backup mechanisms, each with different security and privacy implications. For this study, we identified all general-purpose Android TOTP apps in the Google Play Store that had at least 100k installs and implemented a backup mechanism; we then used dynamic analysis tools to examine the security of their network backups and reverse-engineered their backup protocols. Most of the 22 apps identified used backup strategies that place trust in the very technologies TOTP is meant to supersede: passwords and SMS. Many backup implementations also shared personal user information with third parties, had serious flaws in their implementation and/or usage of cryptography, and allowed the app developers access to users' TOTP secrets. Our paper detailing this work, "Security and Privacy Failures in Popular 2FA Apps," has been accepted for presentation at the 2023 USENIX Security Symposium, which brings together researchers, practitioners, system administrators, system programmers, and others interested in the latest advances in the security and privacy of computer systems and networks.
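For readers unfamiliar with TOTP, the minimal RFC 6238 implementation below shows why the stored secret is the crown jewel that these backup mechanisms must protect: anyone who obtains it can generate valid codes indefinitely. The example secret is the standard RFC 6238 test vector, not a real credential.

    import base64
    import hmac
    import struct
    import time

    def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
        """Compute the current TOTP code from a base32-encoded shared secret."""
        key = base64.b32decode(secret_b32.upper())
        counter = int(time.time()) // period          # time step (RFC 6238)
        msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
        digest = hmac.new(key, msg, "sha1").digest()  # HMAC-SHA1 (RFC 4226)
        offset = digest[-1] & 0x0F                    # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # RFC 6238 test secret ("12345678901234567890" in base32):
    print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"))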

For the last few months, we have been working with the vendors named in that paper as part of responsible vulnerability disclosure. We disclosed our findings about the issues we found in their apps and offered suggestions for fixing them. Some vendors have since fixed the issues in their apps; others have not.


California Consumer Privacy Act (CCPA) compliance
In this work, we investigated CCPA compliance among top-ranked Android app developers from the U.S. Google Play Store. The CCPA requires developers to provide accurate privacy notices and to respond to "right to know" requests by disclosing the personal information they have collected, used, or shared about a consumer for a business or commercial purpose. Of the 69 app developers who substantively replied to our requests, all but one provided specific pieces of personal data (as opposed to only categorical information). We found that a significant fraction of apps collected information that was never disclosed, including identifiers (55 apps, 80%), geolocation data (21 apps, 30%), and sensory data (18 apps, 26%). We also identified several improvements to the CCPA that could help app developers comply.
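At its core, this measurement compares what an app was observed collecting against what its developer disclosed. The Python sketch below shows that comparison with hypothetical inputs; the app names and data-type categories are illustrative, not the taxonomy used in the paper.

    def undisclosed_collection(observed: dict[str, set[str]],
                               disclosed: dict[str, set[str]]) -> dict[str, set[str]]:
        """Return, per app, the data types observed in traffic but never disclosed."""
        result = {}
        for app, types in observed.items():
            missing = types - disclosed.get(app, set())
            if missing:
                result[app] = missing
        return result

    observed = {
        "com.example.game": {"identifiers", "geolocation"},
        "com.example.news": {"identifiers"},
    }
    disclosed = {
        "com.example.game": {"identifiers"},
        "com.example.news": {"identifiers"},
    }
    print(undisclosed_collection(observed, disclosed))
    # -> {'com.example.game': {'geolocation'}}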


After major revisions, and based on the current reviews, we expect that our paper on this work, "Measuring the Compliance of Android App Developers with the California Consumer Privacy Act (CCPA)," will be accepted for presentation at the 23rd Privacy Enhancing Technologies Symposium (PETS). We were required to add an ethics statement explaining that our IRB declined to review the study because it does not involve human subjects. PETS brings together privacy experts from around the world to discuss recent advances and new perspectives on research in privacy technologies.


Contextual integrity
Nathan Malkin, whose PhD was partially funded by this project, wrote an IEEE Security & Privacy article on contextual integrity. The article introduces the theory of contextual integrity and its main ideas, explains its usefulness, and discusses how it can be applied. Malkin completed his PhD last year.

COMMUNITY ENGAGEMENTS

EDUCATIONAL ADVANCES:

  • Several undergraduate and graduate students assisted with this research.
  • Nikita Samarin is finishing his thesis proposal, which will center on the CCPA work detailed above. He is expected to graduate within the next two years.