Misplaced Trust: A Bias in Human-Machine Trust Attribution – In Contradiction to Learning Theory
Title | Misplaced Trust: A Bias in Human-Machine Trust Attribution – In Contradiction to Learning Theory |
Publication Type | Conference Paper |
Year of Publication | 2016 |
Authors | Conway, Dan; Chen, Fang; Yu, Kun; Zhou, Jianlong; Morris, Richard |
Conference Name | Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems |
Date Published | May 2016 |
Publisher | ACM |
Conference Location | New York, NY, USA |
ISBN Number | 978-1-4503-4082-3 |
Keywords | decision making, HCI, human behavior, human trust, learning, pubcrawl, trust, uncertainty |
Abstract | Human-machine trust is a critical mitigating factor in many HCI instances. Lack of trust in a system can lead to system disuse, whilst over-trust can lead to inappropriate use. Whilst human-machine trust has been examined extensively from within a technico-social framework, few efforts have been made to link the dynamics of trust within a steady-state operator-machine environment to the existing literature on the psychology of learning. We set out to recreate a commonly reported learning phenomenon within a trust-acquisition environment: users learning which algorithms can and cannot be trusted to reduce traffic in a city. We failed to replicate (after repeated efforts) the learning phenomenon of "blocking", resulting in a finding that people consistently make a very specific error in trust assignment to cues in conditions of uncertainty. This error can be seen as a cognitive bias and has important implications for HCI. |
URL | https://dl.acm.org/doi/10.1145/2851581.2892433 |
DOI | 10.1145/2851581.2892433 |
Citation Key | conway_misplaced_2016 |