Biblio

Filters: Author is Sean Smith, Dartmouth College
2017-07-18
Christopher Novak, Dartmouth College, Jim Blythe, University of Southern California, Ross Koppel, University of Pennsylvania, Vijay Kothari, Dartmouth College, Sean Smith, Dartmouth College.  2017.  Modeling Aggregate Security with User Agents that Employ Password Memorization Techniques. Symposium On Usable Privacy and Security (SOUPS 2017).

We discuss our ongoing work with an agent-based password simulation which models how site-enforced password requirements affect aggregate security when people interact with multiple authentication systems. We model two password memorization techniques: passphrase generation and spaced repetition. Our simulation suggests system-generated passphrases lead to lower aggregate security across services that enforce even moderate password requirements. Furthermore, allowing users to expand their password length over time via spaced repetition increases aggregate security.
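To make the setup concrete, a minimal sketch of an agent-based password simulation in the spirit of this abstract follows. It is illustrative only: the agent counts, the reuse assumption for passphrase users, the length-growth schedule for spaced repetition, and the aggregate-security metric are all assumptions for demonstration, not the model from the paper.

```python
# Illustrative sketch only: a toy agent-based password simulation.
# The reuse assumption, growth schedule, and "aggregate security" metric
# are demonstration choices, not the model described in the paper.

SERVICES = 5              # independent authentication systems
MIN_POLICY_LENGTH = 10    # a moderate site-enforced minimum password length

class Agent:
    def __init__(self, strategy):
        self.strategy = strategy     # "passphrase" or "spaced_repetition"
        self.memorized_length = 6    # spaced repetition starts short and grows
        self.passwords = {}

    def register(self, service):
        if self.strategy == "passphrase":
            # Assumption: a system-generated passphrase is strong in isolation
            # but hard to memorize, so the agent reuses one across services.
            self.passwords[service] = self.passwords.get(0, "correct-horse-battery-staple")
        else:
            # Assumption: spaced repetition lets the agent extend one memorized
            # secret over time and derive a per-service variant from it.
            self.memorized_length = min(self.memorized_length + 2, 16)
            self.passwords[service] = "x" * self.memorized_length + f"-{service}"

def aggregate_security(agent):
    """Toy metric: fraction of services with a unique, policy-compliant password."""
    unique = set(agent.passwords.values())
    return sum(1 for p in unique if len(p) >= MIN_POLICY_LENGTH) / SERVICES

agents = [Agent("passphrase") for _ in range(50)] + \
         [Agent("spaced_repetition") for _ in range(50)]
for agent in agents:
    for service in range(SERVICES):
        agent.register(service)

for strategy in ("passphrase", "spaced_repetition"):
    scores = [aggregate_security(a) for a in agents if a.strategy == strategy]
    print(strategy, round(sum(scores) / len(scores), 2))
```

Under these assumptions the passphrase group scores lower on the toy metric because reuse collapses its effective password diversity, which mirrors the direction of the abstract's finding.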

Ross Koppel, University of Pennsylvania, Jim Blythe, University of Southern California, Vijay Kothari, Dartmouth College, Sean Smith, Dartmouth College.  2017.  Password Logbooks and What Their Amazon Reviews Reveal About the Users’ Motivations, Beliefs, and Behaviors. 2nd European Workshop on Usable Security (EuroUSEC 2017).

The existence of and market for notebooks designed for users to write down passwords illuminates a sharp contrast: what is often prescribed as proper password behavior—e.g., never write down passwords—differs from what many users actually do. These password logbooks and their reviews provide many unique and surprising insights into their users’ beliefs, motivations, and behaviors. We examine the password logbooks and, using grounded theory, analyze their reviews to better understand how these users think and behave with respect to password authentication. Several themes emerge, including previous password management strategies, gifting, organizational strategies, password sharing, and dubious security advice. Some users argue these books enhance security.

2016-12-16
Jim Blythe, University of Southern California, Ross Koppel, University of Pennsylvania, Sean Smith, Dartmouth College.  2013.  Circumvention of Security: Good Users Do Bad Things.

Conventional wisdom holds that the textbook view describes reality, and that only bad people (not good people trying to get their jobs done) break the rules. Yet the textbook view does not describe reality, and good people circumvent.

Published in IEEE Security & Privacy, volume 11, issue 5, September - October 2013.

2016-12-09
Jim Blythe, University of Southern California, Sean Smith, Dartmouth College.  2015.  Understanding and Accounting for Human Behavior.

Since computers are machines, it's tempting to think of computer security as purely a technical problem. However, computing systems are created, used, and maintained by humans, and exist to serve the goals of human and institutional stakeholders. Consequently, effectively addressing the security problem requires understanding this human dimension.


In this tutorial, we discuss this challenge and survey principal research approaches to it.

Invited Tutorial, Symposium and Bootcamp on the Science of Security (HotSoS 2015), April 2015, Urbana, IL.

2016-07-13
Ross Koppel, University of Pennsylvania, Jim Blythe, University of Southern California, Vijay Kothari, Dartmouth College, Sean Smith, Dartmouth College.  2016.  Beliefs about Cybersecurity Rules and Passwords: A Comparison of Two Survey Samples of Cybersecurity Professionals Versus Regular Users. 12th Symposium On Usable Privacy and Security.

In this paper we explore the differential perceptions of cybersecurity professionals and general users regarding access rules and passwords. We conducted a preliminary survey involving 28 participants: 15 cybersecurity professionals and 13 general users. We present our preliminary findings and explain how such survey data might be used to improve security in practice. We focus on user fatigue with access rules and passwords.

Bruno Korbar, Dartmouth College, Jim Blythe, University of Southern California, Ross Koppel, University of Pennsylvania, Vijay Kothari, Dartmouth College, Sean Smith, Dartmouth College.  2016.  Validating an Agent-Based Model of Human Password Behavior. AAAI-16 Workshop on Artificial Intelligence for Cyber Security.

Effective reasoning about the impact of security policy decisions requires understanding how human users actually behave, rather than assuming desirable but incorrect behavior. Simulation could help with this reasoning, but it requires building computational models of the relevant human behavior and validating that these models match what humans actually do. In this paper we describe our progress on building agent-based models of human behavior with passwords, and we demonstrate how these models reproduce phenomena shown in the empirical literature.
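As a hedged illustration of what reproducing an empirical phenomenon might look like in code, the sketch below runs a crude simulated population, computes one behavioral statistic (password-reuse rate), and compares it with an empirically reported value. The reuse heuristic, the parameter values, and the placeholder benchmark are assumptions, not the authors' model or data.

```python
# Illustrative validation sketch: compare a simulated behavioral statistic
# against an empirical benchmark. The crude reuse heuristic and the
# placeholder benchmark below are assumptions, not figures from the paper.
import random

def simulated_reuse_rate(n_agents=1000, n_accounts=12, memory_capacity=4, seed=1):
    """Fraction of accounts whose password duplicates another of the same agent's."""
    rng = random.Random(seed)
    reused = total = 0
    for _ in range(n_agents):
        # Assumption: each agent memorizes only `memory_capacity` distinct
        # passwords and assigns one of them to each account at random.
        choices = [rng.randrange(memory_capacity) for _ in range(n_accounts)]
        counts = {}
        for c in choices:
            counts[c] = counts.get(c, 0) + 1
        reused += sum(n for n in counts.values() if n > 1)
        total += n_accounts
    return reused / total

EMPIRICAL_REUSE_RATE = 0.0   # placeholder: substitute a value reported in the literature
simulated = simulated_reuse_rate()
print(f"simulated reuse rate: {simulated:.2f}")
print(f"gap to empirical benchmark: {abs(simulated - EMPIRICAL_REUSE_RATE):.2f}")
```

In the paper's setting, validation means checking that the agents reproduce phenomena reported empirically; here that comparison is reduced to a single printed gap.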
 

2015-11-17
Sean Smith, Dartmouth College, Ross Koppel, University of Pennsylvania, Jim Blythe, University of Southern California, Vijay Kothari, Dartmouth College.  2015.  Mismorphism: A Semiotic Model of Computer Security Circumvention (poster abstract). Symposium and Bootcamp on the Science of Security (HotSoS).

In real world domains, from healthcare to power to finance, we deploy computer systems intended to streamline and improve the activities of human agents in the corresponding non-cyber worlds. However, talking to actual users (instead of just computer security experts) reveals endemic circumvention of the computer-embedded rules. Good-intentioned users, trying to get their jobs done, systematically work around security and other controls embedded in their IT systems.

This poster reports on our work compiling a large corpus of such incidents and developing a model based on semiotic triads to examine security circumvention. This model suggests that mismorphisms—mappings that fail to preserve structure—lie at the heart of circumvention scenarios; differential perceptions and needs explain users’ actions. We support this claim with empirical data from the corpus.

2015-11-16
Vijay Kothari, Dartmouth College, Jim Blythe, University of Southern California, Ross Koppel, University of Pennsylvania, Sean Smith, Dartmouth College.  2015.  Measuring the Security Impacts of Password Policies Using Cognitive Behavioral Agent Based Modeling. Symposium and Bootcamp on the Science of Security (HotSoS).

Agent-based modeling can serve as a valuable asset to security personnel who wish to better understand the security landscape within their organization, especially as it relates to user behavior and circumvention. In this paper, we argue in favor of cognitive behavioral agent-based modeling for usable security, report on our work on developing an agent-based model for a password management scenario, perform a sensitivity analysis, which provides us with valuable insights into improving security (e.g., an organization that wishes to suppress one form of circumvention may want to endorse another), and provide directions for future work.
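For readers unfamiliar with sensitivity analysis in this setting, a minimal sketch follows: it sweeps one policy parameter (minimum password length) over a small population with heterogeneous memorization limits and records a toy write-down (circumvention) rate alongside a toy net-security score. The behavioral rule and both metrics are illustrative assumptions, not the cognitive model from the paper.

```python
# Illustrative sensitivity-analysis sketch: sweep a single password-policy
# parameter and observe how a toy circumvention rate and a toy net-security
# score respond. The behavioral rule and metrics are assumptions for
# demonstration, not the model described in the paper.

def sweep_policy(min_lengths=range(6, 17, 2), memorization_limits=(8, 10, 12, 14)):
    results = []
    for min_length in min_lengths:
        # Assumed rule: an agent writes the password down (circumvents) when
        # the required length exceeds what that agent can memorize.
        write_down = [min_length > limit for limit in memorization_limits]
        write_down_rate = sum(write_down) / len(write_down)
        guessing_resistance = min_length / 16          # toy proxy for strength
        net_security = max(guessing_resistance - 0.7 * write_down_rate, 0.0)
        results.append({
            "min_length": min_length,
            "write_down_rate": round(write_down_rate, 2),
            "net_security": round(net_security, 2),
        })
    return results

for row in sweep_policy():
    print(row)
```

Under these assumed numbers, net security peaks at a moderate requirement and then degrades as stricter policies push more agents to write passwords down; the paper's cognitive model is far richer, but this is the kind of trade-off a sensitivity analysis is meant to expose.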

Sean Smith, Dartmouth College, Ross Koppel, University of Pennsylvania, Jim Blythe, University of Southern California, Vijay Kothari, Dartmouth College.  2015.  Mismorphism: A Semiotic Model of Computer Security Circumvention.

In real world domains, from healthcare to power to finance, we deploy computer systems intended to streamline and improve the activities of human agents in the corresponding non-cyber worlds. However, talking to actual users (instead of just computer security experts) reveals endemic circumvention of the computer-embedded rules. Good-intentioned users, trying to get their jobs done, systematically work around security and other controls embedded in their IT systems.

This paper reports on our work compiling a large corpus of such incidents and developing a model based on semiotic triads to examine security circumvention. This model suggests that mismorphisms—mappings that fail to preserve structure—lie at the heart of circumvention scenarios; differential perceptions and needs explain users’ actions. We support this claim with empirical data from the corpus.
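One way to read the triad-based model in concrete terms is sketched below: each stakeholder's view is encoded as a (referent, representation, interpretation) triple, and a mismorphism is flagged when two stakeholders share a referent but the other corners of their triads fail to line up. This encoding, the field names, and the example values are illustrative assumptions, not the authors' formalism.

```python
# Illustrative encoding only: a stakeholder's semiotic triad as a small data
# structure, with a check that flags structure-breaking mismatches
# ("mismorphisms") between two stakeholders' triads. This is one reading of
# the abstract, not the authors' formalism.
from dataclasses import dataclass

@dataclass(frozen=True)
class Triad:
    referent: str        # the real-world thing being secured
    representation: str  # how the system encodes or controls it
    interpretation: str  # what this stakeholder takes it to mean

def mismorphic(a: Triad, b: Triad) -> bool:
    """Same referent, but the other corners of the triads disagree."""
    return (a.referent == b.referent and
            (a.representation != b.representation or
             a.interpretation != b.interpretation))

# Hypothetical example: a designer and a clinician view the same control
# differently, which is the kind of gap that precedes circumvention.
designer = Triad("patient-record access", "one login per user", "one clinician per session")
clinician = Triad("patient-record access", "a shared ward login", "the whole team needs the chart now")
print(mismorphic(designer, clinician))   # True
```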