Understanding Developers' Reasoning about Privacy and Security - UMD - July 2016

PI(s): Michelle Mazurek, Charalampos Papamanthou, Mohit Tiwari
Researchers: Casen Hunger, Doowon Kim, Yehuda Katz

 

PROJECT GOAL

Our goal is to discover, understand, and quantify the challenges that developers face in writing secure and privacy-preserving programs. Several research thrusts support this goal.

Qualitative studies of developers allow us to discover cultural and workplace dynamics that encourage or discourage privacy and security by design. Experiments with alternative design schemas enable us to test how best to facilitate adoption. To support these studies, we are developing the Bubbles platform, which will give developers a place to write privacy-preserving applications that respect the privacy decisions users have made.

Understanding design settings:

Interviews with, and observations of, application developers can reveal factors within design settings (such as work practices, institutional arrangements, or social norms) that encourage developers to value privacy and security in design and to adopt techniques that protect privacy and security. We will conduct interviews with professional developers in a diversity of development settings (small and large companies, contractors, and independent developers) in Washington, DC and Silicon Valley, and we will observe design meetings at companies as well as hackathons. Analyzing field notes and interview transcripts will reveal how developers discover and learn about new privacy and security techniques, what encourages them to adopt new privacy and security practices, and how they make trade-offs between privacy, security, and other priorities.

 

Understanding users' behavior:

In order to help developers make the right choices when writing privacy-preserving applications, we need to understand users' privacy needs. We plan to observe and document how users make decisions about maintaining their privacy. Toward that goal, we will conduct interviews and analyze how users cluster and share their personal data, such as emails or photo folders.

 

Facilitating adoption:

Techniques such as information-flow control can offer strong privacy guarantees but have failed to gain traction among developers. Concepts such as lattices of security labels and the scrubbing of implicit-flow leaks from programs require developers to learn security concepts in order to work correctly on an information-flow-secure platform (e.g., Jif, Flume). We have developed an alternative scheme in which developers partition their apps based on functionality (analogous to a model-view-controller pattern) instead of using labels and information-flow-secure compilers; the sketch below illustrates the contrast. We will conduct developer studies using A/B testing to compare the ease of programming with information flow versus our programming model. Similarly, we will study design patterns for security features in applications, such as privilege separation, key management in a distributed application, and mandatory access control policies for app components. These design patterns will enable even non-security-expert developers to write secure and private applications by default.
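To make the contrast concrete, the minimal Python sketch below is our own illustration (it is not the Bubbles API, nor code for Jif or Flume). The first half shows the label-lattice reasoning an information-flow-secure platform asks of developers; the second half shows functionality-based components whose data access is decided by the platform rather than by developer-managed labels. All names and components here are hypothetical.

```python
# Hypothetical sketch: the same "don't leak private data to a public sink" goal,
# expressed first with a label lattice the developer must reason about, then with
# functionality-based partitions whose data access the platform decides.

from dataclasses import dataclass

# --- Style 1: information-flow control with a lattice of security labels ---

LATTICE = {"public": 0, "secret": 1}  # two-point lattice: public is below secret

def can_flow(src_label: str, dst_label: str) -> bool:
    """Data may only flow upward in the lattice (never secret -> public)."""
    return LATTICE[src_label] <= LATTICE[dst_label]

@dataclass
class LabeledValue:
    value: str
    label: str

def send_to_sink(item: LabeledValue, sink_label: str) -> None:
    """The developer must label every value and sink correctly for this to help."""
    if not can_flow(item.label, sink_label):
        raise PermissionError(f"flow {item.label} -> {sink_label} denied")
    print(f"sent {item.value!r} to a {sink_label} sink")

# --- Style 2: partition the app by functionality (MVC-like components) ---
# Each component has a narrow job; the platform routes only the data a component
# is entitled to see, so the developer never manipulates security labels.

class ViewerComponent:
    """Renders a single user's data; has no network access by construction."""
    def render(self, record: str) -> str:
        return f"<p>{record}</p>"

class SharingComponent:
    """Handles sharing; only receives data the platform has cleared for sharing."""
    def share(self, record: str, recipients: list) -> None:
        print(f"sharing {record!r} with {recipients}")

if __name__ == "__main__":
    send_to_sink(LabeledValue("vacation photo", "secret"), "secret")   # allowed
    try:
        send_to_sink(LabeledValue("vacation photo", "secret"), "public")  # blocked
    except PermissionError as err:
        print("IFC check rejected the flow:", err)
    print(ViewerComponent().render("vacation photo"))
    SharingComponent().share("trip itinerary", ["alice@example.com"])
```

In the first style the security burden sits with the developer (choosing and propagating labels); in the second, the developer's job is only to split functionality into components, which is the kind of structural decision most developers already make.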

HARD PROBLEM(S) ADDRESSED

Human behavior

PUBLICATIONS

Krontiris, I., Langheinrich, M. & Shilton, K. (2014). Trust and Privacy in Mobile Experience Sharing - Future Challenges and Avenues for Research. IEEE Communications, August 2014. http://cps-vo.org/node/17109

Martin, K. and Shilton, K. (in press) "Why Experience Matters To Privacy: How Context-Based Experience Moderates Consumer Privacy Expectations for Mobile Applications." Journal of the Association for Information Science & Technology.

ACCOMPLISHMENT HIGHLIGHTS

Bubbles User Study

The team has continued developing the survey to evaluate the usability of the Bubbles platform. To conduct the study, the survey collects a participant's Google Drive, Gmail, and Google Calendar data. Using machine-learning techniques, the survey then infers logical groupings of data across all applications, where each grouping contains items shared with the same set of users (a toy sketch of this idea appears below). We then ask participants to evaluate the accuracy of the groupings.
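The survey's actual inference pipeline uses machine-learning techniques and is still being refined; the hypothetical Python sketch below only illustrates the core intuition of the grouping step, using made-up records and a simple "same audience" key rather than the study's real code.

```python
# Hypothetical illustration of the grouping intuition: items from Gmail, Drive,
# and Calendar are bucketed by the exact set of accounts they are shared with.
# This toy version only shows what "grouped because they are shared with the
# same people" means; it is not the survey's machine-learning pipeline.

from collections import defaultdict

# Toy records: (service, item name, accounts the item is shared with)
items = [
    ("gmail",    "Trip planning thread", {"alice@example.com", "bob@example.com"}),
    ("drive",    "Trip budget sheet",    {"alice@example.com", "bob@example.com"}),
    ("calendar", "Flight to Austin",     {"alice@example.com", "bob@example.com"}),
    ("drive",    "Performance review",   {"manager@example.com"}),
]

def group_by_audience(records):
    """Return candidate groupings keyed by the set of users who can see each item."""
    groups = defaultdict(list)
    for service, name, audience in records:
        groups[frozenset(audience)].append((service, name))
    return groups

if __name__ == "__main__":
    for audience, members in group_by_audience(items).items():
        print(sorted(audience), "->", members)
```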

We have refined the user experience and the machine-learning process. Based on our own testing and piloting with friends, we identified changes to the survey process. We determined the minimum number of emails required to create meaningful bubbles, and we adjusted the data-collection process so that participants would not need to wait too long (lowering the risk that they would abandon the survey). We also made significant changes to the project's underlying framework to make it more reliable. Because people use email in so many different ways, the system needs to be flexible in order to collect data properly from the general public.

The team has amended its applications to the University of Maryland Institutional Review Board. The University of Texas IRB determined that no approval was necessary for this project. We are currently awaiting final approval from UMD and DoD to begin the study. Once approved, the team will begin collecting data using Amazon Mechanical Turk.