Science of Human Circumvention - July 2017

Public Audience
Purpose: To highlight project progress. Information is presented at a higher level, accessible to the interested public. All information contained in the report (regions 1-3) is a Government Deliverable/CDRL.

PI(s): Tao Xie

Co-PI(s): Jim Blythe (USC), Ross Koppel (UPenn), and Sean Smith (Dartmouth)

HARD PROBLEM(S) ADDRESSED
This refers to Hard Problems, released November 2012.

Our project most closely aligns with problem 5, "Understanding and Accounting for Human Behavior." However, it also pertains to problems 1, 2, and 3:

  • Scalability and Composability: We want to understand not just the drivers of individual incidents of human circumvention, but also the net effect of these incidents. Included here are measures of the environment (physical, organizational, hierarchical, and embeddedness within larger systems).
  • Policy-Governed Secure Collaboration: To create policies that actually enable secure collaboration among users in varying domains, we need to understand and predict the de facto consequences of policies, not just the de jure ones.
  • Security-Metrics-Driven Evaluation, Design, Development, and Deployment: Making sound decisions about which security controls to deploy requires understanding the de facto consequences of those deployments, rather than pretending that circumvention by honest users never happens.

PUBLICATIONS
Papers published in this quarter as a result of this research. Include title, author(s), venue published/presented, and a short description or abstract. Identify which hard problem(s) the publication addressed. Papers that have not yet been published should be reported in region 2 below.

  • Ross Koppel, Jim Blythe, Vijay Kothari, and Sean Smith, "Password Logbooks and What Their Amazon Reviews Reveal About Their Users' Motivations, Beliefs, and Behaviors", 2nd European Workshop on Usable Security (EuroUSEC 2017), Paris, France, April 29, 2017.
  • Ross Koppel and Harold Thimbleby, "Lessons from the 100 Nation Ransomware Attack", The Health Care Blog (THCB), May 14, 2017. http://thehealthcareblog.com/
  • Haibing Zheng, Dengfeng Li, Xia Zeng, Beihai Liang, Wujie Zheng, Yuetang Deng, Wing Lam, Wei Yang, and Tao Xie, "Automated Test Input Generation for Android: Towards Getting There in an Industrial Case", 39th International Conference on Software Engineering (ICSE 2017), Software Engineering in Practice (SEIP), Buenos Aires, Argentina, May 20-28, 2017.
  • Christopher Novak, Jim Blythe, Ross Koppel, Vijay Kothari, and Sean Smith, "Modeling Aggregate Security with User Agents that Employ Password Memorization Techniques", Who Are You?! Adventures in Authentication (WAY 2017), workshop in conjunction with Symposium On Usable Privacy and Security (SOUPS 2017), July 12-14, 2017, Santa Clara, CA.
  • Benjamin Andow, Akhil Acharya, Dengfeng Li, William Enck, Kapil Singh, and Tao Xie, "UiRef: Analysis of Sensitive User Inputs in Android Applications", 10th ACM Conference on Security and Privacy in Wireless and Mobile Networks (WiSec 2017), Boston, MA, July 18-20, 2017.

Other Presentations

  • Dengfeng Li, Wing Lam, Wei Yang, Zhengkai Wu, Xusheng Xiao, and Tao Xie, "Towards Privacy-Preserving Mobile Apps: A Balancing Act", Poster, Symposium and Bootcamp on the Science of Security (HotSoS 2017), Hanover, MD, April 4-5, 2017.
  • Jim Blythe, Sean Smith, Ross Koppel, Christopher Novak, Vijay Kothari. "FARM: A Toolkit for Finding the Appropriate Level of Realism for Modeling." Poster, Symposium and Bootcamp on the Science of Security (HotSoS 2017), Hanover, MD, April 4-5, 2017.
  • Jim Blythe, Ross Koppel, Sean Smith, Vijay Kothari. "Analysis of Two Parallel Surveys on Cybersecurity: Users and Security Administrators---notable similarities and differences." Poster, Symposium and Bootcamp on the Science of Security (HotSoS 2017), Hanover, MD, April 4-5, 2017.
  • Sean Smith, Ross Koppel, Jim Blythe, Vijay Kothari. "Flawed Mental Models Lead to Bad Cybersecurity Decisions: Let's Do a Better Job!" Poster, Symposium and Bootcamp on the Science of Security (HotSoS 2017), Hanover, MD, April 4-5, 2017.
  • Sean Smith, "User Circumvention of Cybersecurity: A Cross-Disciplinary Approach", DIMACS/Northeast Big Data Hub Workshop on Privacy and Security for Big Data, Piscataway, NJ, April 24-25, 2017.
  • Sean Smith, "The Internet of Risky Things: Trusting the Devices that Surround Us", University of New Hampshire, Durham, NH, April 20, 2017.
  • Jim Blythe. "Modeling Human Behavior to Improve Cyber Security", Presentation to the Loyola Marymount University MBA class on human decision-making, Los Angeles, CA, June 2017.
  • Sean Smith. "Cybersecurity Fundamentals", Cyber Resilient Energy Delivery Consortium (CREDC) Summer School, St Charles IL, June 12, 2017.
  • Sean Smith, "Cyber Resilience for EDS (my view)", Cyber Resilient Energy Delivery Consortium (CREDC) Summer School, St Charles IL, June 13, 2017.
  • Vijay Kothari (Moderator), Ross Koppel (Panelist), Shrirang Mare (Panelist), Scott Rudkin (Panelist), Harold Thimbleby (Panelist), "On Developing Authentication Solutions for Healthcare Settings", Panel, Who Are You?! Adventures in Authentication (WAY 2017), workshop in conjunction with Symposium On Usable Privacy and Security (SOUPS 2017), July 12-14, 2017, Santa Clara, CA.
  • Ross Koppel, "Understanding Circumvention of Cybersecurity Authentication: Ridiculous Rules, Reasonable and Unreasonable Responses, and User Rationales", Presentation to the Joint NSF/Intel project on cybersecurity of the Internet of Things, Hillsboro, OR, August 9-11, 2017.

ACCOMPLISHMENT HIGHLIGHTS

Our goal is to improve aggregate security in light of rampant user circumvention of security policies and recommended security practices. We combine our interdisciplinary expertise to tackle this problem of human circumvention of security using various approaches, including, but not limited to, semiotic modeling, surveys, observations of in-situ behavior, analysis of user logs and help-desk logs, behavioral experiments (including experiments on Amazon Mechanical Turk), and agent-based simulation. We seek to (a) enlighten security practitioners as to what users think and do, (b) bridge disconnects between security practitioners' mental models and reality, (c) develop tools to aid in security decisions, and (d) suggest better security solutions.

Via fieldwork in real-world enterprises, we have been identifying and cataloging types and causes of circumvention by well-intentioned users. We are using help-desk logs, records of security-related computer changes, analysis of user behavior in situ, and surveys, in addition to interviews and observations. We have then begun to build and validate models of usage and circumvention behavior, first for individuals and then for populations within an enterprise, as well as developing typologies of the deeper patterns and causes. For example, we have adapted previous work in semiotics to build a model that captures mismorphisms: disconnects among various actors' mental models, the system's representation of reality, and reality itself. We believe improvements to this model may enable us to meaningfully classify whole classes of security issues and suggest methods to address them.
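To make the mismorphism idea concrete, the following is a minimal illustrative sketch in Python; it is not the project's model code, and the class, function, and example statements are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Representation:
        holder: str      # e.g., "user", "admin", "reality"
        statement: str   # the fact as this holder represents it

    def find_mismorphisms(representations):
        """Return holder pairs whose representations of the same fact disagree."""
        mismatches = []
        for i, a in enumerate(representations):
            for b in representations[i + 1:]:
                if a.statement != b.statement:
                    mismatches.append((a.holder, b.holder))
        return mismatches

    views = [
        Representation("admin", "sessions time out after 15 minutes of inactivity"),
        Representation("user", "the terminal logs me out whenever I step away"),
        Representation("reality", "clinicians prop sessions open to avoid re-login"),
    ]
    print(find_mismorphisms(views))  # every pairwise disconnect is a candidate mismorphism

The point of the sketch is only the data structure: a mismorphism is a mismatch among co-existing representations of the same security-relevant fact.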

We have been developing questionnaires for both high-level computer security professionals and general users. The results will improve our understanding of the perceptions, attitudes, and behaviors of both security practitioners and general users. Indeed, the results may improve security practitioners' decisions directly, or indirectly by providing the data needed to build faithful models of human behavior that can inform those practitioners. We have conducted surveys on a small scale and have done an initial analysis of the results. We are now conducting surveys on a larger scale and have doubled the number of respondents in the past few months.

Using DASH, an agent-based modeling platform, we have built and are continually improving a password simulation for measuring the security provided by a password composition policy, taking into account human circumventions such as writing down and reusing passwords. We have also pursued development of a phishing simulation and other security-focused simulations. We continue to refine the model to improve its faithfulness to reality and its usefulness. In particular, this quarter we used the simulation to compare the efficacy of two password memorization techniques.

We have also been continually developing the aforementioned platform, DASH, for agent-based simulations of circumventive behaviors, in order to understand their causes and consequences. We have completed the re-implementation of DASH in Python and have built several agents on the new platform, including models of password behavior, authentication on shared computers, attackers, and phishing susceptibility.
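As a rough illustration of the kind of experiment these two paragraphs describe, the sketch below simulates agents choosing passwords under a composition policy, where a heavier memorization burden raises the chance of writing down or reusing a password. This is not DASH code; the probabilities and weights are assumptions made purely for illustration.

    import random

    def simulate(min_length, memorization_aid, agents=1000, seed=0):
        """Fraction of agents that write down or reuse a password under one policy."""
        rng = random.Random(seed)
        wrote_down = reused = 0
        for _ in range(agents):
            # Memorization burden grows with the required length; a memorization
            # technique reduces it. These weights are assumptions, not measurements.
            burden = min_length / 8.0
            if memorization_aid:
                burden *= 0.6
            if rng.random() < min(0.9, 0.2 * burden):
                wrote_down += 1
            elif rng.random() < min(0.9, 0.3 * burden):
                reused += 1
        return wrote_down / agents, reused / agents

    for aid in (False, True):
        print("aid" if aid else "no aid", simulate(min_length=12, memorization_aid=aid))

Comparing the two runs (with and without a memorization technique) mirrors, at toy scale, the comparison of password memorization techniques mentioned above.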

We are also developing a DASH toolset, called FARM, that partially automates large-scale simulation experiments, running sufficient trials across many independent and dependent variables and distributing the runs across many computers. By providing an explicit representation of the assumptions that underpin the agent and world representations, we will enable FARM to propose and execute experiments that explore the range of validity of conclusions drawn from simulation experiments. This is an essential step in using insights from simulations to propose and test new security software and policy.
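The following is a minimal sketch of the experiment-orchestration pattern described above: enumerate a grid over independent variables, repeat each cell for a fixed number of trials, and fan the runs out to workers. It is not FARM's actual interface; the variable names and trial count are hypothetical.

    from itertools import product
    from multiprocessing import Pool

    INDEPENDENT_VARS = {
        "min_length": [8, 10, 12],
        "memorization_aid": [False, True],
    }
    TRIALS = 20

    def run_trial(job):
        params, trial = job
        # Placeholder for a single simulation run; a real harness would invoke
        # the agent-based model here and return its dependent variables.
        return {"params": params, "trial": trial, "result": None}

    if __name__ == "__main__":
        keys = list(INDEPENDENT_VARS)
        grid = [dict(zip(keys, values)) for values in product(*INDEPENDENT_VARS.values())]
        jobs = [(params, t) for params in grid for t in range(TRIALS)]
        with Pool() as pool:
            results = pool.map(run_trial, jobs)
        print(len(results), "runs completed")  # 3 x 2 x 20 = 120 runs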

We are building a platform to conduct password security experiments on Mechanical Turk that will provide data on why, how, and when users circumvent recommended password practices. We are in the final stages of testing and expect to run these experiments soon. A Dartmouth undergraduate will be working on this over the summer.

We are collaborating with researchers at the University of Pennsylvania who specialize in simulating and checking Markov chain models. We are exploring ways to blend these Markov-based models with our DASH model to tackle security problems using ground-truth data from the Mechanical Turk experiments and the literature.
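For readers unfamiliar with the Markov-model side of this collaboration, the sketch below shows the general shape of such a model: states for user behavior, a transition matrix, and the long-run (stationary) distribution. The states and probabilities here are hypothetical placeholders, not ground-truth values from our data.

    import numpy as np

    states = ["compliant", "reuses_password", "writes_down"]
    # Row i gives the probability of moving from state i to each state per period.
    P = np.array([
        [0.80, 0.15, 0.05],
        [0.10, 0.85, 0.05],
        [0.05, 0.10, 0.85],
    ])

    # The stationary distribution is the left eigenvector of P for eigenvalue 1:
    # the long-run fraction of time a user spends in each state.
    eigvals, eigvecs = np.linalg.eig(P.T)
    stationary = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    stationary /= stationary.sum()
    print(dict(zip(states, stationary.round(3))))

The appeal of the blend is that ground-truth data (from Mechanical Turk and the literature) can calibrate transition probabilities like these, while DASH agents supply richer behavior than a flat state machine.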

We have continued developing a privacy framework that enables a mobile app's developers to determine what sensitive information can be anonymized while maintaining a desirable level of utility. We presented the preliminary work at the monthly UIUC/R2 meeting in February 2017 and presented a poster on this work at HotSoS 2017.

We have developed UiRef (User Input REsolution Framework), an automated approach for resolving the semantics of user inputs requested by mobile applications. This work advances our long-term goal of understanding the broader implications of such requests in terms of the types of sensitive data being requested by applications. The work was accepted for presentation at WiSec 2017.
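To give a sense of the problem UiRef targets, the sketch below scans an Android layout for EditText widgets and uses their hint text to guess whether a sensitive input is being requested. This is only an illustration of the problem, not UiRef's actual technique, and the keyword list and example layout are made up.

    import xml.etree.ElementTree as ET

    ANDROID_NS = "{http://schemas.android.com/apk/res/android}"
    SENSITIVE_KEYWORDS = {"password", "ssn", "credit card", "date of birth", "email"}

    def sensitive_inputs(layout_xml):
        """Return hint text of EditText widgets whose hints look sensitive."""
        flagged = []
        for widget in ET.fromstring(layout_xml).iter():
            if widget.tag.endswith("EditText"):
                hint = (widget.get(ANDROID_NS + "hint") or "").lower()
                if any(keyword in hint for keyword in SENSITIVE_KEYWORDS):
                    flagged.append(hint)
        return flagged

    layout = """
    <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android">
        <EditText android:hint="Email address" />
        <EditText android:hint="Password" />
        <EditText android:hint="Nickname" />
    </LinearLayout>
    """
    print(sensitive_inputs(layout))  # ['email address', 'password']

Real applications are far messier than this (labels may be images, generated at runtime, or ambiguous), which is why resolving input semantics requires the deeper analysis UiRef provides.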

We now list some accomplishment highlights from the latest quarter. (Section 2 below has a more complete presentation.)

  • Co-PIs Blythe and Koppel presented three posters at HotSoS 2017. These posters reflect updates and additions to previous work by our team.
  • Dartmouth undergraduate Christopher Novak, Co-PI Jim Blythe, Co-PI Ross Koppel, Dartmouth graduate student Vijay Kothari, and Co-PI Sean Smith submitted "Modeling Aggregate Security with User Agents that Employ Password Memorization Techniques" to the WAY workshop as part of SOUPS 2017. This work was accepted and will be presented on July 12, 2017 by Vijay Kothari.
  • Dartmouth graduate student Vijay Kothari, assisted by Co-PIs Ross Koppel, Jim Blythe, and Sean Smith, submitted a panel proposal, with co-authors/panelists Ross Koppel, Shrirang Mare, Scott Rudkin, and Harold Thimbleby, entitled "On Developing Authentication Solutions for Healthcare Settings" to the WAY workshop as part of SOUPS 2017. This panel proposal was accepted. The panel presentation will be approximately 65 minutes and will be held on July 12, 2017.
  • Dartmouth graduate student Kothari presented "Password Logbooks and What Their Amazon Reviews Reveal About Their Users' Motivations, Beliefs, and Behaviors" at EuroUSEC 2017 on April 29, 2017.
  • Co-PI Koppel published more relevant material in The HealthCare Blog.
  • Co-PI Koppel will present results from this project at the Joint NSF/Intel project meeting on cybersecurity of the Internet of Things, Hillsboro, OR, August 9-11, 2017.
  • Jim Blythe presented "Modeling Human Behavior to Improve Cyber Security" to an MBA class on human decision-making at Loyola Marymount University. Such talks have been used to gather survey participants in the past.
  • Supported student Dengfeng Li presented a poster at HotSoS 2017 on work in progress toward a framework for privacy-preserving mobile apps.
  • PI Xie co-presented the paper "Automated Test Input Generation for Android: Towards Getting There in an Industrial Case" at ICSE 2017. This work provides a supporting test-generation technique that assists the investigation of mobile-application security and privacy.