Biblio

Filters: Keyword is mental models
2021-03-29
Anell, S., Gröber, L., Krombholz, K.  2020.  End User and Expert Perceptions of Threats and Potential Countermeasures. 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW). :230–239.

Experts often design security and privacy technology with specific use cases and threat models in mind. In practice, however, end users are not aware of these threats and potential countermeasures. Furthermore, misconceptions about the benefits and limitations of security and privacy technology inhibit large-scale adoption by end users. In this paper, we address this challenge and contribute a qualitative study on end users' and security experts' perceptions of threat models and potential countermeasures. We follow an inductive research approach to explore the perceptions and mental models of both security experts and end users. We conducted semi-structured interviews with 8 security experts and 13 end users. Our results suggest that, in contrast to security experts, end users neglect acquaintances and friends as attackers in their threat models. Our findings highlight that experts value technical countermeasures, whereas end users try to implement trust-based defensive methods.

2020-01-02
Muszynska, Maria, Michels, Denise, von Zezschwitz, Emanuel.  2018.  Not On My Phone: Exploring Users' Conception of Related Permissions. Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems. :LBW508:1–LBW508:6.

Many smartphone security mechanisms prompt users to decide on sensitive resource requests. This approach fails if the corresponding implications are not understood. Prior work identified ineffective user interfaces as a cause of insufficient comprehension and proposed augmented dialogs. We hypothesize that, prior to interface design, efficient security dialogs require an underlying permission model based on user demands. We believe that only an implementation which corresponds to users' mental models, in terms of the handling, granularity, and grouping of permission requests, allows for informed decisions. In this work, we propose a study design which leverages materialization for the extraction of these mental models. We present preliminary results of three focus groups. The findings indicate that the materialization provided sufficient support for non-experts to understand and discuss this complex topic. In addition, the results indicate that current permission approaches do not match users' demands for information and control.

2017-10-18
Luger, Ewa, Sellen, Abigail.  2016.  "Like Having a Really Bad PA": The Gulf Between User Expectation and Experience of Conversational Agents. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. :5286–5297.

The past four years have seen the rise of conversational agents (CAs) in everyday life. Apple, Microsoft, Amazon, Google and Facebook have all embedded proprietary CAs within their software and, increasingly, conversation is becoming a key mode of human-computer interaction. Whilst we have long been familiar with the notion of computers that speak, the investigative concern within HCI has focused on multimodality rather than dialogue alone, and there is little sense of how such interfaces are used in everyday life. This paper reports the findings of interviews with 14 users of CAs in an effort to understand the current interactional factors affecting everyday use. We find user expectations dramatically out of step with the operation of the systems, particularly in terms of known machine intelligence, system capability, and goals. Using Norman's 'gulfs of execution and evaluation' [30], we consider the implications of these findings for the design of future systems.