Biblio

Filters: Author is Morris, Richard
2017-05-16
Conway, Dan, Chen, Fang, Yu, Kun, Zhou, Jianlong, Morris, Richard.  2016.  Misplaced Trust: A Bias in Human-Machine Trust Attribution – In Contradiction to Learning Theory. Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. :3035–3041.

Human-machine trust is a critical mediating factor in many human-computer interactions. Lack of trust in a system can lead to disuse, whilst over-trust can lead to inappropriate use. Whilst human-machine trust has been examined extensively within a socio-technical framework, few efforts have been made to link the dynamics of trust in a steady-state operator-machine environment to the existing literature on the psychology of learning. We set out to recreate a commonly reported learning phenomenon within a trust-acquisition environment: users learning which algorithms can and cannot be trusted to reduce traffic in a city. Despite repeated attempts, we failed to replicate the learning phenomenon of "blocking", and instead found that people consistently make a very specific error when assigning trust to cues under conditions of uncertainty. This error can be seen as a cognitive bias and has important implications for HCI.
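The "blocking" effect referred to above is a classic finding in associative learning: a cue trained in compound with an already-predictive cue acquires little associative strength of its own, because the established cue already accounts for the outcome. The snippet below is not from the paper; it is a minimal sketch of how blocking arises under the standard Rescorla-Wagner learning rule, with arbitrary learning-rate, trial-count, and reward values chosen purely for illustration.

```python
# Minimal Rescorla-Wagner simulation of the "blocking" effect.
# Illustrative only (not from the paper); learning rate, trial counts,
# and the reward asymptote (LAMBDA) are arbitrary choices.

ALPHA = 0.3   # learning rate (cue salience)
LAMBDA = 1.0  # asymptote of learning (reward magnitude)

def train(trials, strengths):
    """Update associative strengths for a list of (cues, reward) trials."""
    for cues, reward in trials:
        prediction = sum(strengths[c] for c in cues)  # summed prediction from present cues
        error = reward - prediction                   # prediction error
        for c in cues:
            strengths[c] += ALPHA * error             # all present cues share the error
    return strengths

# Blocking group: cue A is pre-trained alone, then A+B trained in compound.
blocking = {"A": 0.0, "B": 0.0}
train([(("A",), LAMBDA)] * 50, blocking)         # phase 1: A -> reward
train([(("A", "B"), LAMBDA)] * 50, blocking)     # phase 2: A+B -> reward

# Control group: A+B trained in compound from the start, no pre-training.
control = {"A": 0.0, "B": 0.0}
train([(("A", "B"), LAMBDA)] * 50, control)

print("Blocking group V(B):", round(blocking["B"], 3))  # near 0: B is "blocked"
print("Control group  V(B):", round(control["B"], 3))   # near 0.5: B learned normally
```

In the blocking group, cue B ends with near-zero strength because cue A already predicts the reward; the study described above failed to find the analogous discounting when participants assigned trust to algorithmic cues under uncertainty.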