Biblio

Filters: Keyword is User perception
von Zezschwitz, Emanuel, Chen, Serena, Stark, Emily.  2022.  "It builds trust with the customers" - Exploring User Perceptions of the Padlock Icon in Browser UI. 2022 IEEE Security and Privacy Workshops (SPW). :44–50.
We performed a large-scale online survey (n=1,880) to study the padlock icon, an established security indicator in web browsers that denotes connection security through HTTPS. In this paper, we evaluate users’ understanding of the padlock icon, and how removing or replacing it might influence their expectations and decisions. We found that the majority of respondents (89%) had misconceptions about the padlock’s meaning. While only a minority (23%-44%) referred to the padlock icon at all when asked to evaluate trustworthiness, these padlock-aware users reported that they would be deterred from a hypothetical shopping transaction when the padlock icon was absent. These users were reassured after seeing secondary UI surfaces (i.e., Chrome Page Info) where more verbose information about connection security was present. We conclude that the padlock icon, displayed by browsers in the address bar, is still misunderstood by many users. The padlock icon guarantees connection security, but is often perceived to indicate the general privacy, security, and trustworthiness of a website. We argue that communicating connection security precisely and clearly is likely to be more effective through secondary UI, where there is more surface area for content. We hope that this paper boosts the discussion about the benefits and drawbacks of showing passive security indicators in the browser UI.
ISSN: 2770-8411
Ng, M., Coopamootoo, K. P. L., Toreini, E., Aitken, M., Elliot, K., Moorsel, A. van.  2020.  Simulating the Effects of Social Presence on Trust, Privacy Concerns & Usage Intentions in Automated Bots for Finance. 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW). :190–199.
FinBots are chatbots built on automated decision technology, aimed to facilitate accessible banking and to support customers in making financial decisions. Chatbots are increasing in prevalence, sometimes even equipped to mimic human social rules, expectations and norms, decreasing the necessity for human-to-human interaction. As banks and financial advisory platforms move towards creating bots that enhance the current state of consumer trust and adoption rates, we investigated the effects of chatbot vignettes with and without socio-emotional features on intention to use the chatbot for financial support purposes. We conducted a between-subject online experiment with N = 410 participants. Participants in the control group were provided with a vignette describing a secure and reliable chatbot called XRO23, whereas participants in the experimental group were presented with a vignette describing a secure and reliable chatbot that is more human-like and named Emma. We found that Vignette Emma did not increase participants' trust levels nor lower their privacy concerns, even though it increased the perception of social presence. However, we found that intention to use the presented chatbot for financial support was positively influenced by perceived humanness and trust in the bot. Participants were also more willing to share financially sensitive information such as account number, sort code and payments information with XRO23 than with Emma - revealing a preference for a technical and mechanical FinBot in information sharing. Overall, this research contributes to our understanding of the intention to use chatbots with different features as financial technology, in particular that socio-emotional support may not be favoured when designed independently of financial function.
Dechand, Sergej, Naiakshina, Alena, Danilova, Anastasia, Smith, Matthew.  2019.  In Encryption We Don’t Trust: The Effect of End-to-End Encryption to the Masses on User Perception. 2019 IEEE European Symposium on Security and Privacy (EuroS&P). :401–415.
With WhatsApp's adoption of the Signal Protocol as its default, end-to-end encryption by the masses happened almost overnight. Unlike iMessage, WhatsApp notifies users that encryption is enabled, explicitly informing users about improved privacy. This rare feature gives us an opportunity to study people's understandings and perceptions of secure messaging pre- and post-mass messenger encryption (pre/post-MME). To study changes in perceptions, we compared the results of two mental models studies: one conducted in 2015 pre-MME and one in 2017 post-MME. Our primary finding is that users do not trust encryption as currently offered. When asked about encryption in the study, most stated that they had heard of encryption, but only a few understood the implications, even on a high level. Their consensus view was that no technical solution exists to stop skilled attackers from getting their data. Even with a major development, such as WhatsApp rolling out end-to-end encryption, people still do not feel well protected by their technology. Surprisingly, despite WhatsApp's end-to-end security info messages and the high media attention, the majority of the participants were not even aware of encryption. Most participants had an almost correct threat model, but did not believe that there is a technical solution to stop knowledgeable attackers from reading their messages. Using technology made them feel vulnerable.
Ha, Taehyun, Lee, Sangwon, Kim, Sangyeon.  2018.  Designing Explainability of an Artificial Intelligence System. Proceedings of the Technology, Mind, and Society. :14:1–14:1.

Explainability and accuracy of machine learning algorithms usually lie in a trade-off relationship. Several algorithms, such as deep-learning artificial neural networks, have high accuracy but low explainability. Since there were only limited ways to access the learning and prediction processes of these algorithms, researchers and users were not able to understand how the results were produced. However, a recent project on explainable artificial intelligence (XAI) by DARPA showed that AI systems can be highly explainable and also accurate. Several XAI technical reports suggested ways of extracting explainable features and described their positive effects on users; the results showed that the explainability of AI helped users understand and trust the system. However, only a few studies have addressed why explainability brings such positive effects to users. We suggest theoretical reasons drawn from attribution theory and anthropomorphism studies. Through a review, we develop three hypotheses: (1) causal attribution is part of human nature, so a system that provides a causal explanation of its process will lead users to attribute the system's results; (2) based on these attribution results, users will perceive the system as human-like, which will motivate anthropomorphism; (3) users will then perceive the system through this anthropomorphism. We provide a research framework for designing causal explainability of an AI system and discuss the expected results of the research.