Biblio

Filters: Keyword is human-automation trust
2020-09-21
Razin, Yosef, Feigh, Karen.  2019.  Toward Interactional Trust for Humans and Automation: Extending Interdependence. 2019 IEEE SmartWorld, Ubiquitous Intelligence Computing, Advanced Trusted Computing, Scalable Computing Communications, Cloud Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI). :1348–1355.
Trust in human-automation interaction is increasingly imperative as AI and robots become ubiquitous at home, school, and work. Interdependence theory allows for the identification of one-on-one interactions that require trust by analyzing the structure of the potential outcomes. This paper synthesizes multiple, formerly disparate research approaches by extending Interdependence theory to create a unified framework for outcome-based trust in human-automation interaction. This framework quantitatively contextualizes validated empirical results from social psychology on relationship formation, stability, and betrayal. It also contributes insights into trust-related concepts, such as power and commitment, which help further our understanding of trustworthy system design. This new integrated interactional approach reveals how trust and trustworthiness can transform machines from merely reliable tools into trusted teammates working hand-in-actuator toward an automated future.
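
As a rough illustration of the outcome-based view this abstract describes, the sketch below is a hypothetical Python toy, not the paper's formalism: the 2x2 payoffs, function names, and the trust heuristic are invented for illustration. It represents a one-shot interaction as an outcome matrix and flags the situation as trust-relevant when the actor's outcome depends on the partner's choice.

# Hypothetical toy sketch (not the paper's formalism): a one-shot, two-choice
# interaction as a 2x2 outcome matrix, flagged as trust-relevant when the
# actor's outcome depends on what the partner chooses.

# outcomes[(actor_choice, partner_choice)] = (actor_outcome, partner_outcome)
# Payoff values below are invented purely for illustration.
outcomes = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def partner_dependence(outcomes, actor_choice):
    # How far the actor's outcome can swing with the partner's choice,
    # holding the actor's own choice fixed.
    vals = [outcomes[(actor_choice, p)][0] for p in ("cooperate", "defect")]
    return max(vals) - min(vals)

def requires_trust(outcomes):
    # Crude heuristic: the situation is trust-relevant if, for at least one of
    # the actor's own choices, the actor is vulnerable to the partner's choice.
    return any(partner_dependence(outcomes, a) > 0 for a in ("cooperate", "defect"))

if __name__ == "__main__":
    for a in ("cooperate", "defect"):
        print(f"dependence given actor chooses {a}: {partner_dependence(outcomes, a)}")
    print("trust-relevant situation:", requires_trust(outcomes))

Interdependence theory's actual analysis decomposes outcome matrices more finely (e.g., actor control, partner control, and joint control); this toy version only captures the basic idea of vulnerability to another agent's choice.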
2019-02-08
Jensen, Theodore, Albayram, Yusuf, Khan, Mohammad Maifi Hasan, Buck, Ross, Coman, Emil, Fahim, Md Abdullah Al.  2018.  Initial Trustworthiness Perceptions of a Drone System Based on Performance and Process Information. Proceedings of the 6th International Conference on Human-Agent Interaction. :229–237.

Prior work notes dispositional, learned, and situational aspects of trust in automation. However, no work has investigated the relative role of these factors in initial trust of an automated system. Moreover, researchers studying trust in automation often consider trust unidimensionally, whereas ability, integrity, and benevolence perceptions (i.e., trusting beliefs) may provide a more thorough understanding of trust dynamics. To investigate this, we recruited 163 participants on Amazon's Mechanical Turk (MTurk) and randomly assigned each to one of four videos describing a hypothetical drone system: a control video, or one adding information about system performance, system process, or both. Participants reported on trusting beliefs in the system, propensity to trust other people, risk-taking tendencies, and trust in the government law enforcement agency behind the system. We found that financial risk-taking tendencies influenced trusting beliefs. Also, those who received process information were likely to have higher integrity and ability beliefs than those who did not, while those who received performance information were likely to have higher ability beliefs. Lastly, perceptions of structural assurance positively influenced all three trusting beliefs. Our findings suggest that a) users' risk-taking tendencies influence trustworthiness perceptions of systems, b) different types of information about a system have varied effects on the trustworthiness dimensions, and c) institutions play an important role in users' calibration of trust. Insights gained from this study can help design training materials and interfaces that improve user trust calibration in automated systems.