Biblio

Filters: Author is Zhou, Jianlong
2017-10-13
Yu, Kun, Berkovsky, Shlomo, Conway, Dan, Taib, Ronnie, Zhou, Jianlong, Chen, Fang.  2016.  Trust and Reliance Based on System Accuracy. Proceedings of the 2016 Conference on User Modeling Adaptation and Personalization. :223–227.

Trust plays an important role in various user-facing systems and applications. It is particularly important in the context of decision support systems, where the system's output serves as one of the inputs for the users' decision making processes. In this work, we study the dynamics of explicit and implicit user trust in a simulated automated quality monitoring system, as a function of the system accuracy. We establish that users correctly perceive the accuracy of the system and adjust their trust accordingly.

2017-05-16
Conway, Dan, Chen, Fang, Yu, Kun, Zhou, Jianlong, Morris, Richard.  2016.  Misplaced Trust: A Bias in Human-Machine Trust Attribution – In Contradiction to Learning Theory. Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. :3035–3041.

Human-machine trust is a critical mitigating factor in many HCI instances. Lack of trust in a system can lead to system disuse, whilst over-trust can lead to inappropriate use. Whilst human-machine trust has been examined extensively from within a technico-social framework, few efforts have been made to link the dynamics of trust within a steady-state operator-machine environment to the existing literature of the psychology of learning. We set out to recreate a commonly reported learning phenomenon within a trust acquisition environment: users learning which algorithms can and cannot be trusted to reduce traffic in a city. We failed to replicate (after repeated efforts) the learning phenomenon of "blocking", resulting in a finding that people consistently make a very specific error in trust assignment to cues in conditions of uncertainty. This error can be seen as a cognitive bias and has important implications for HCI.