Biblio

Filters: Keyword is human-automation interaction
Razin, Y. S., Feigh, K. M. 2020. Hitting the Road: Exploring Human-Robot Trust for Self-Driving Vehicles. 2020 IEEE International Conference on Human-Machine Systems (ICHMS). :1–6.

With self-driving cars making their way onto our roads, we ask not what it would take for them to gain acceptance among consumers, but what impact they may have on other drivers. How they will be perceived and whether they will be trusted will likely have a major effect on traffic flow and vehicular safety. This work first undertakes an exploratory factor analysis to validate a trust scale for human-robot interaction, showing how previously validated metrics and general trust theory support a more complete model of trust with increased applicability in the driving domain. We then experimentally test this expanded model in the context of human-automation interaction during simulated driving, revealing how these dimensions uncover significant biases in human-robot trust that may have particularly deleterious effects when it comes to sharing our future roads with automated vehicles.
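The factor-analysis step the abstract mentions can be made concrete with a small sketch. The snippet below is a hypothetical illustration only: it uses simulated Likert-style responses and scikit-learn's FactorAnalysis with varimax rotation; the items, data, and two-factor structure are invented and are not the authors' trust scale or analysis pipeline.

```python
# Hypothetical sketch of an exploratory factor analysis (EFA) step like the
# one the abstract describes; the trust-scale items and data are invented
# for illustration, not the authors' instrument or dataset.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 200 respondents answering 6 trust items driven by two latent
# dimensions (e.g., performance-oriented vs. intent-oriented trust).
n = 200
latent = rng.normal(size=(n, 2))
loadings_true = np.array([
    [0.9, 0.1], [0.8, 0.2], [0.7, 0.0],   # items loading on factor 1
    [0.1, 0.9], [0.0, 0.8], [0.2, 0.7],   # items loading on factor 2
])
items = latent @ loadings_true.T + rng.normal(scale=0.5, size=(n, 6))

# Fit a two-factor model with varimax rotation and inspect the loadings;
# items clustering cleanly on distinct factors is the kind of evidence
# used to argue that a scale measures separable trust dimensions.
fa = FactorAnalysis(n_components=2, rotation="varimax")
fa.fit(items)
print(np.round(fa.components_.T, 2))  # rows = items, columns = factors
```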

Razin, Y., Feigh, K. 2019. Toward Interactional Trust for Humans and Automation: Extending Interdependence. 2019 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI). :1348–1355.

Trust in human-automation interaction is increasingly imperative as AI and robots become ubiquitous at home, school, and work. Interdependence theory allows for the identification of one-on-one interactions that require trust by analyzing the structure of the potential outcomes. This paper synthesizes multiple, formerly disparate research approaches by extending Interdependence theory to create a unified framework for outcome-based trust in human-automation interaction. This framework quantitatively contextualizes validated empirical results from social psychology on relationship formation, stability, and betrayal. It also contributes insights into trust-related concepts, such as power and commitment, which help further our understanding of trustworthy system design. This new integrated interactional approach reveals how trust and trustworthiness can elevate machines from merely reliable tools to trusted teammates working hand-in-actuator toward an automated future.
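
To make the outcome-structure idea concrete: interdependence theory classically decomposes a 2x2 outcome matrix into actor control, partner control, and joint control. The sketch below applies that textbook decomposition to a made-up rely-vs-act-manually scenario; the payoff values are invented for illustration, and this is not the paper's framework or data.

```python
# Hypothetical illustration of the interdependence-theory idea the abstract
# invokes: analyzing an outcome matrix to see whether an interaction makes
# one party dependent on (and thus needing to trust) the other. The payoffs
# and decomposition are textbook-style, not the paper's model.
import numpy as np

# Human's outcomes in a 2x2 situation: rows = human's action (rely on the
# automation or act manually), columns = automation's behavior (performs
# well or fails). Relying pays off only if the automation performs.
human_outcomes = np.array([
    [4.0, -2.0],   # rely:   good if automation performs, costly if it fails
    [1.0,  1.0],   # manual: safe but modest either way
])

# Decompose outcome variance into actor control (effect of one's own
# action), partner control (effect of the other's action), and joint
# control (the interaction of the two).
grand = human_outcomes.mean()
actor_control = human_outcomes.mean(axis=1) - grand
partner_control = human_outcomes.mean(axis=0) - grand
joint_control = (human_outcomes - grand
                 - actor_control[:, None] - partner_control[None, :])

print("actor control:  ", actor_control)
print("partner control:", partner_control)
print("joint control:\n", joint_control)
# Nonzero partner/joint control means the human's outcome hinges on the
# automation's behavior: the structural signature of a trust-requiring choice.
```

In this toy matrix the actor-control terms come out to zero while the partner- and joint-control terms do not, which is exactly the pattern that marks the human's choice as one of dependence on, and therefore trust in, the automation.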