Over the past decade, it has become clear that many software security problems stem from a failure to account for human factors. Yet even when software does feature user-centric design, it accounts for average user behavior rather than catering to the individual. Thus, systems designers have gone from designing for security experts to appealing to the lowest common denominator. The goal of this project is to examine the ways in which security mitigations can be tailored to individuals, and how this is likely to yield even greater security compliance than has previously been achieved through user-centric design. Specifically, this research focuses on demonstrating how security mitigations can be tailored to individuals through indirect measurements and inferences of individual differences. This research could help security and privacy engineers develop more personalized and salient means of alerting users to security and privacy risks, which could increase users' compliance with security messaging and thereby reduce threats to users and their organizations.
The challenge in personalizing security mitigations is to infer the individual differences that predict whether a user will respond more favorably to one mitigation design than to another. This approach relies on individual differences that are well studied in the psychology and decision-making literature and that are predictive of compliance with computer security mitigations. Building on extensive work on choice architecture and "nudges," this research aims to personalize security mitigations to specific user traits so that each user can be dynamically presented with the security "nudge" that would be most effective for her. For example, if the target user measures high on decision-making "dependence" (i.e., looking to others for advice), the system might state the number of experts who selected the recommended option. Specifically, the researchers focus on framing the following types of security mitigations based on users' psychometric traits: smartphone/tablet lock screen enrollment, password creation instructions, web browser warnings, and software update notices. Their goal is to implement systems that infer the ways in which users are likely to respond to particular security mitigation designs and then tailor security environments accordingly.
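To make the approach concrete, the following is a minimal sketch of what trait-based nudge selection might look like in practice. The trait names, score thresholds, and message wordings are illustrative assumptions, not the researchers' actual instruments or implementation.

```python
# Hypothetical sketch: selecting a security "nudge" framing based on a
# user's psychometric trait scores. All names, thresholds, and message
# templates below are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class TraitProfile:
    """Normalized (0-1) scores from a decision-making style instrument."""
    dependence: float   # tendency to look to others for advice
    rationality: float  # preference for systematic, evidence-based reasoning


def select_update_nudge(profile: TraitProfile, expert_count: int) -> str:
    """Return the software update notice framing predicted to be most
    persuasive for this user, given their measured trait profile."""
    if profile.dependence >= 0.7:
        # High-dependence users are expected to respond to expert endorsement.
        return (f"{expert_count} security experts recommend installing "
                f"this update now.")
    if profile.rationality >= 0.7:
        # High-rationality users are expected to respond to concrete risk facts.
        return ("This update fixes a vulnerability that lets remote "
                "attackers run code on your device.")
    # Default framing for users without a strong measured preference.
    return "An important security update is available. Install now?"


# Example: a user scoring high on dependence sees the expert-endorsement framing.
print(select_update_nudge(TraitProfile(dependence=0.85, rationality=0.4),
                          expert_count=27))
```

In a deployed system, the trait scores would come not from an explicit questionnaire but from the indirect measurements and inferences described above, and the same selection logic would apply across the other mitigation types (lock screen enrollment, password instructions, and browser warnings).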
Serge Egelman is Research Director of the Usable Security & Privacy Group at the International Computer Science Institute (ICSI) and also holds an appointment in the Department of Electrical Engineering and Computer Sciences (EECS) at the University of California, Berkeley. He leads the Berkeley Laboratory for Usable and Experimental Security (BLUES), which is the amalgamation of his ICSI and UCB research groups. Serge's research focuses on the intersection of privacy, computer security, and human-computer interaction, with the specific aim of better understanding how people make decisions surrounding their privacy and security, and then creating data-driven improvements to systems and interfaces. This has included human subjects research on social networking privacy, access controls, authentication mechanisms, web browser security warnings, and privacy-enhancing technologies. His work has received multiple best paper awards, including seven ACM CHI Honorable Mentions; the 2012 Symposium on Usable Privacy and Security (SOUPS) Distinguished Paper Award, for his work on smartphone application permissions; the 2017 SOUPS Impact Award; and the 2012 Information Systems Research Best Published Paper Award, for his work on consumers' willingness to pay for online privacy. He received his PhD from Carnegie Mellon University and prior to that was an undergraduate at the University of Virginia. He has also performed research at NIST, Brown University, Microsoft Research, and Xerox PARC.