ACM CHI Conference on Human Factors in Computing Systems - Toronto, Canada
SoS Newsletter - Advanced Book Block
CHI 2014, the ACM CHI Conference on Human Factors in Computing Systems, was held in Toronto, Canada from April 26 to May 1, 2014. The papers shown below were selected for their relevance to Human Behavior and Cybersecurity, and were presented in sessions including Social Local Mobile; Privacy; Risks and Security; and Authentication and Passwords.
Session: Social Local Mobile
Let's Do It at My Place Instead? Attitudinal and Behavioral Study of Privacy in Client-Side Personalization
Alfred Kobsa, Bart P. Knijnenburg, Benjamin Livshits
Many users welcome personalized services, but are reluctant to provide the information about themselves that personalization requires. Performing personalization exclusively at the client side (e.g., on one's smartphone) may conceptually increase privacy, because no data is sent to a remote provider. But does client-side personalization (CSP) also increase users' perception of privacy? We developed a causal model of privacy attitudes and behavior in personalization, and validated it in an experiment that contrasted CSP with personalization at three remote providers: Amazon, a fictitious company, and the "Cloud". Participants gave roughly the same amount of personal data and tracking permissions in all four conditions. A structural equation modeling analysis reveals the reasons: CSP raises the fewest privacy concerns, but does not lead in terms of perceived protection nor in resulting self-anticipated satisfaction and thus privacy-related behavior. Encouragingly, we found that adding certain security features to CSP is likely to raise its perceived protection significantly. Our model predicts that CSP will then also sharply improve on all other privacy measures.
Keywords: Privacy; personalization; client-side; structural equation modeling (SEM); attitudes; behaviors (ID#:14-3342)
URL: http://dl.acm.org/citation.cfm?id=2557102 or http://dx.doi.org/10.1145/2556288.2557102
The Effect of Developer-Specified Explanations for Permission Requests on Smartphone User Behavior
Joshua S Tan, Khanh Nguyen, Michael Theodorides, Heidi Negron-Arroyo, Christopher Thompson, Serge Egelman, David Wagner
In Apple's iOS 6, when an app requires access to a protected resource (e.g., location or photos), the user is prompted with a permission request that she can allow or deny. These permission request dialogs include space for developers to optionally include strings of text to explain to the user why access to the resource is needed. We examine how app developers are using this mechanism and the effect that it has on user behavior. Through an online survey of 772 smartphone users, we show that permission requests that include explanations are significantly more likely to be approved. At the same time, our analysis of 4,400 iOS apps shows that the adoption rate of this feature by developers is relatively small: around 19% of permission requests include developer-specified explanations. Finally, we surveyed 30 iOS developers to better understand why they do or do not use this feature.
Keywords: Smartphones; Privacy; Access Control; Usability (ID#:14-3343)
URL: http://dl.acm.org/citation.cfm?id=2557400 or http://dx.doi.org/10.1145/2556288.2557400
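The developer-specified explanations studied above are supplied through "usage description" (purpose string) keys in an app's Info.plist; the key names below are real iOS keys, while the explanation text is a hypothetical example of the kind of string the paper examines:

```xml
<!-- Excerpt from an app's Info.plist (illustrative example) -->
<key>NSLocationUsageDescription</key>
<string>Your location is used to show nearby store deals.</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>Photo access is needed to attach pictures to your reviews.</string>
```

When such a key is present, the system permission dialog displays the string beneath the standard request text, which is the mechanism whose adoption rate and effect on approval the paper measures.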
Effects of Security Warnings and Instant Gratification Cues on Attitudes toward Mobile Websites
Bo Zhang, Mu Wu, Hyunjin Kang, Eun Go, S. Shyam Sundar
In order to address the increased privacy and security concerns raised by mobile communications, designers of mobile applications and websites have come up with a variety of warnings and appeals. While some interstitials warn about potential risk to personal information due to an untrusted security certificate, others attempt to take users' minds away from privacy concerns by making tempting, time-sensitive offers. How effective are they? We conducted an online experiment (N = 220) to find out. Our data show that both these strategies raise red flags for users--appeals to instant gratification make users more leery of the site and warnings make them perceive greater threat to personal data. Yet, users tend to reveal more information about their social media accounts when warned about an insecure site. This is probably because users process these interstitials based on cognitive heuristics triggered by them. These findings hold important implications for the design of cues in mobile interfaces.
Keywords: Online privacy; security; information disclosure; trust; mobile interface. (ID#:14-3344)
URL: http://dl.acm.org/citation.cfm?id=2557347 or http://dx.doi.org/10.1145/2556288.2557347
Session: Privacy
Leakiness and Creepiness in App Space: Perceptions of Privacy and Mobile App Use
Irina A Shklovski, Scott D. Mainwaring, Halla Hrund Skuladottir, Hoskuldur Borgthorsson
Mobile devices are playing an increasingly intimate role in everyday life. However, users can be surprised when informed of the data collection and distribution activities of apps they install. We report on two studies of smartphone users in western European countries, in which users were confronted with app behaviors and their reactions assessed. Users felt their personal space had been violated in "creepy" ways. Using Altman's notions of personal space and territoriality, and Nissenbaum's theory of contextual integrity, we account for these emotional reactions and suggest that they point to important underlying issues, even when users continue using apps they find creepy.
Keywords: Mobile devices; data privacy; bodily integrity; learned helplessness; creepiness (ID#:14-3345)
URL: http://dl.acm.org/citation.cfm?id=2557421 or http://dx.doi.org/10.1145/2556288.2557421
A Field Trial of Privacy Nudges for Facebook
Yang Wang, Pedro Giovanni Leon, Alessandro Acquisti, Lorrie Faith Cranor, Alain Forget, Norman Sadeh
Anecdotal evidence and scholarly research have shown that Internet users may regret some of their online disclosures. To help individuals avoid such regrets, we designed two modifications to the Facebook web interface that nudge users to consider the content and audience of their online disclosures more carefully. We implemented and evaluated these two nudges in a 6-week field trial with 28 Facebook users. We analyzed participants' interactions with the nudges, the content of their posts, and opinions collected through surveys. We found that reminders about the audience of posts can prevent unintended disclosures without major burden; however, introducing a time delay before publishing users' posts can be perceived as both beneficial and annoying. On balance, some participants found the nudges helpful while others found them unnecessary or overly intrusive. We discuss implications and challenges for designing and evaluating systems to assist users with online disclosures.
Keywords: Behavioral bias; Online disclosure; Social media; Facebook; Nudge; Privacy; Regret; Soft-paternalism (ID#:14-3346)
URL: http://dl.acm.org/citation.cfm?id=2557413 or http://dx.doi.org/10.1145/2556288.2557413
Session: Risks and Security
Betrayed By Updates: How Negative Experiences Affect Future Security
Kami E. Vaniea, Emilee Rader, Rick Wash
Installing security-relevant software updates is one of the best computer protection mechanisms. However, users do not always choose to install updates. Through interviewing non-expert Windows users, we found that users frequently decide not to install future updates, regardless of whether they are important for security, after negative experiences with past updates. This means that even non-security updates (such as user interface changes) can impact the security of a computer. We discuss three themes impacting users' willingness to install updates: unexpected new features in an update, the difficulty of assessing whether an update is "worth it", and confusion about why an update is necessary.
Keywords: Software Updates; Human Factors; Security (ID#:14-3347)
URL: http://dl.acm.org/citation.cfm?id=2557275 or http://dx.doi.org/10.1145/2556288.2557275
Session: Authentication and Passwords
Can Long Passwords be Secure and Usable?
Richard Shay, Saranga Komanduri, Adam L. Durity, Phillip (Seyoung) Huh, Michelle L. Mazurek, Sean M. Segreti, Blase Ur, Lujo Bauer, Nicolas Christin, Lorrie Faith Cranor
To encourage strong passwords, system administrators employ password-composition policies, such as a traditional policy requiring that passwords have at least 8 characters from 4 character classes and pass a dictionary check. Recent research has suggested, however, that policies requiring longer passwords with fewer additional requirements can be more usable and in some cases more secure than this traditional policy. To explore long passwords in more detail, we conducted an online experiment with 8,143 participants. Using a cracking algorithm modified for longer passwords, we evaluate eight policies across a variety of metrics for strength and usability. Among the longer policies, we discover new evidence for a security/usability tradeoff, with none being strictly better than another on both dimensions. However, several policies are both more usable and more secure than the traditional policy we tested. Our analyses additionally reveal common patterns and strings found in cracked passwords. We discuss how system administrators can use these results to improve password-composition policies.
Keywords: Passwords; Password-composition policies; Security policy; Usable security; Authentication (ID#:14-3348)
URL: http://dl.acm.org/citation.cfm?id=2557377 or http://dx.doi.org/10.1145/2556288.2557377
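The "traditional policy" the abstract describes (at least 8 characters, all 4 character classes, and a dictionary check) can be sketched as a simple validator. This is an illustrative reconstruction, not the authors' implementation; the tiny word list stands in for the real cracking dictionary such policies use:

```python
import string

# Hypothetical stand-in for a real cracking dictionary.
DICTIONARY = {"password", "letmein", "dragon"}

def satisfies_comp8(password: str) -> bool:
    """Sketch of an '8 chars, 4 classes, dictionary check' policy."""
    if len(password) < 8:
        return False
    # Require all four character classes.
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    if not all(classes):
        return False
    # Dictionary check: reject if the lowercased, letters-only form
    # of the password appears in the word list.
    stripped = "".join(c for c in password.lower() if c.isalpha())
    return stripped not in DICTIONARY

print(satisfies_comp8("Password1!"))   # fails the dictionary check
print(satisfies_comp8("Tr0ub4dor&3"))  # passes all requirements
```

The longer policies the paper compares against would instead relax the class and dictionary requirements while raising the minimum length.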
An Implicit Author Verification System for Text Messages Based on Gesture Typing Biometrics
Ulrich Burgbacher, Klaus H. Hinrichs
Gesture typing is a popular text input method used on smartphones. Gesture keyboards are based on word gestures that subsequently trace all letters of a word on a virtual keyboard. Instead of tapping a word key by key, the user enters a word gesture with a single continuous stroke. In this paper, we introduce an implicit user verification approach for short text messages that are entered with a gesture keyboard. We utilize the way people interact with gesture keyboards to extract behavioral biometric features. We propose a proof-of-concept classification framework that learns the gesture typing behavior of a person and is able to decide whether a gestured message was written by the legitimate user or an imposter. Data collected from gesture keyboard users in a user study is used to assess the performance of the classification framework, demonstrating that the technique has considerable promise.
Keywords: Gesture keyboards; implicit authentication; behavioral biometrics; mobile phone security (ID#:14-3349)
URL: http://dl.acm.org/citation.cfm?id=2557346 or http://dx.doi.org/10.1145/2556288.2557346
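The verification idea above, learning how a person gestures and flagging gestures that deviate from that profile, can be illustrated with a deliberately simplified sketch. The features (stroke duration and average speed) and the threshold rule here are hypothetical; the paper's actual classification framework is more sophisticated:

```python
from statistics import mean

def features(stroke):
    """Behavioral features of one word gesture.

    stroke: list of (x, y, t) touch samples; returns (duration,
    average speed). These two features are illustrative choices.
    """
    duration = stroke[-1][2] - stroke[0][2]
    path_len = sum(
        ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        for (x1, y1, _), (x2, y2, _) in zip(stroke, stroke[1:])
    )
    return (duration, path_len / duration if duration else 0.0)

def enroll(strokes):
    """Build a user profile: the mean feature vector over samples."""
    feats = [features(s) for s in strokes]
    return tuple(mean(f[i] for f in feats) for i in range(2))

def verify(profile, stroke, tol=0.5):
    """Accept the gesture if every feature is within a relative
    tolerance of the enrolled profile; otherwise flag an impostor."""
    f = features(stroke)
    return all(
        abs(f[i] - profile[i]) <= tol * max(abs(profile[i]), 1e-9)
        for i in range(2)
    )
```

For example, a user who enrolls with quick strokes would be rejected when the same word gesture arrives ten times slower, which is the kind of implicit, per-message decision the paper's framework makes.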
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.