Science of Human Circumvention of Security - July 2014
Public Audience
Purpose: To highlight project progress. Information is presented at a higher level that is accessible to the interested public. All information contained in the report (regions 1-3) is a Government Deliverable/CDRL.
PI(s): Tao Xie
Co-PI(s): Jim Blythe, Ross Koppel, Sean Smith
Researchers:
HARD PROBLEM(S) ADDRESSED
* Our project most closely aligns with problem 5, "Understanding and Accounting for Human Behavior." However, it also pertains to problems 1, 2, and 3:
o "Scalability and Composability": We want to understand not just the drivers of individual incidents of human circumvention, but also the net effect of these incidents. Included here are measures of the environment (physical, organizational, hierarchical, embeddedness within larger systems).
o "Policy-Governed Secure Collaboration": To create policies that actually enable secure collaboration among users in varying domains, we need to understand and predict the de facto consequences of policies, not just the de jure ones.
o "Security-Metrics-Driven Evaluation, Design, Development, and Deployment": Making sane decisions about which security controls to deploy requires understanding the de facto consequences of those deployments, rather than pretending that circumvention by honest users never happens.
PUBLICATIONS
[1] V. Kothari, J. Blythe, S.W. Smith, and R. Koppel. "Agent-Based Modeling of User Circumvention of Security." ACySE '14: Proceedings of the 1st International Workshop on Agents and CyberSecurity. ACM. May 2014. Hard problem addressed: Human Behavior
ABSTRACT:
Security subsystems are often designed with flawed assumptions arising from system designers' faulty mental models. Designers tend to assume that users behave according to some textbook ideal, and to consider each potential exposure/interface in isolation. However, fieldwork continually shows that even well-intentioned users often depart from this ideal and circumvent controls in order to perform daily work tasks, and that "incorrect" user behaviors can create unexpected links between otherwise "independent" interfaces. When it comes to security features and parameters, designers try to find the choices that optimize security utility; however, these flawed assumptions give rise to an incorrect utility curve, leading to choices that actually make security worse in practice.
We propose that improving this situation requires giving designers more accurate models of real user behavior and how it influences aggregate system security. Agent-based modeling can be a fruitful first step here. In this paper, we study a particular instance of this problem, propose user-centric techniques designed to strengthen the security of systems while simultaneously improving their usability, and propose further directions of inquiry.
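The core idea behind agent-based modeling of circumvention can be made concrete with a toy simulation. The sketch below is illustrative only, not the model from the paper; every name and parameter (per-user burden tolerance, the base and circumvention risk values, the uniform tolerance distribution) is a hypothetical choice for demonstration. Each simulated user complies with a policy until its burden exceeds that user's tolerance, then circumvents; aggregate exposure is then a function of policy strictness, and past some point stricter policies make security worse.

```python
import random

class UserAgent:
    """A toy user who complies with a security policy while its burden stays
    below the user's tolerance, and circumvents (e.g., writes the password
    down) once the burden exceeds it."""
    def __init__(self, burden_tolerance):
        self.burden_tolerance = burden_tolerance  # hypothetical per-user parameter

    def circumvents(self, policy_burden):
        return policy_burden > self.burden_tolerance

def aggregate_exposure(policy_burden, agents, base_risk=0.01, circumvention_risk=0.2):
    """Mean per-user exposure: compliant users carry only a small residual
    base risk, while each circumventing user adds a much larger risk."""
    risks = [circumvention_risk if a.circumvents(policy_burden) else base_risk
             for a in agents]
    return sum(risks) / len(risks)

random.seed(0)
# Tolerances drawn uniformly from (0.2, 0.8), so a burden of 0.1 triggers
# no circumvention and a burden of 0.9 triggers universal circumvention.
agents = [UserAgent(random.uniform(0.2, 0.8)) for _ in range(1000)]

# Sweep policy strictness: stricter policies impose a higher burden on users.
for burden in (0.1, 0.5, 0.9):
    print(f"burden={burden}: exposure={aggregate_exposure(burden, agents):.3f}")
```

Even this crude sweep reproduces the qualitative point of the abstract: the designer's assumed utility curve (more strictness, less risk) inverts once circumvention is modeled.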
ACCOMPLISHMENT HIGHLIGHTS
* Because this project continues the preceding SHUCS project (now finishing) from the previous round of the SoS Lablet, this quarter's accomplishments are closely coupled with those of the concluding SHUCS project.
* The team is exploring the use of NLP/automatic text analysis on problem reports and change logs from partner IT departments (unearthed during our fieldwork), and is also applying these techniques to the open-ended responses to our questionnaire.
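A first pass at such text analysis might simply flag reports whose vocabulary suggests circumvention. The sketch below is a minimal illustration, not the team's actual pipeline; the indicator lexicon, threshold, and sample reports are all hypothetical (a real study would derive terms from the fieldwork data).

```python
import re

# Hypothetical indicator lexicon; a real analysis would derive terms
# empirically from the fieldwork corpus rather than hand-pick them.
CIRCUMVENTION_TERMS = {"shared", "taped", "workaround", "bypass", "disabled"}

def tokenize(text):
    """Lowercase the text and split it into alphabetic word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def circumvention_score(report):
    """Fraction of a report's tokens that appear in the indicator lexicon."""
    tokens = tokenize(report)
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in CIRCUMVENTION_TERMS)
    return hits / len(tokens)

reports = [
    "User taped password to monitor; shared login used as workaround for timeout.",
    "Printer out of toner on floor 3.",
]
# Flag reports whose score exceeds an (arbitrary, illustrative) threshold.
flagged = [r for r in reports if circumvention_score(r) > 0.1]
```

Keyword scoring like this is only a triage step; the open-ended questionnaire responses would likewise need richer techniques (e.g., topic modeling) to surface circumvention patterns the lexicon misses.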