Biblio

Filters: Author is Esterwood, Connor
2023-02-17
Esterwood, Connor, Robert, Lionel P.  2022.  Having the Right Attitude: How Attitude Impacts Trust Repair in Human-Robot Interaction. 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :332–341.
Robot co-workers, like human co-workers, make mistakes that undermine trust. Yet, trust is just as important in promoting human-robot collaboration as it is in promoting human-human collaboration. In addition, individuals can significantly differ in their attitudes toward robots, which can also impact or hinder their trust in robots. To better understand how individual attitude can influence trust repair strategies, we propose a theoretical model that draws from the theory of cognitive dissonance. To empirically verify this model, we conducted a between-subjects experiment with 100 participants assigned to one of four repair strategies (apologies, denials, explanations, or promises) over three trust violations. Individual attitudes did moderate the efficacy of repair strategies, and this effect differed over successive trust violations. Specifically, repair strategies were most effective relative to individual attitude during the second of the three trust violations, and promises were the trust repair strategy most impacted by an individual's attitude.
2022-02-03
Esterwood, Connor, Robert, Lionel P.  2021.  Do You Still Trust Me? Human-Robot Trust Repair Strategies. 2021 30th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :183–188.
Trust is vital to promoting human and robot collaboration, but like human teammates, robots make mistakes that undermine trust. As a result, a human's perception of his or her robot teammate's trustworthiness can dramatically decrease [1], [2], [3], [4]. Trustworthiness consists of three distinct dimensions: ability (i.e., competency), benevolence (i.e., concern for the trustor), and integrity (i.e., honesty) [5], [6]. Taken together, decreases in trustworthiness decrease trust in the robot [7]. To address this, we conducted a 2 (high vs. low anthropomorphism) x 4 (trust repair strategies) between-subjects experiment. Preliminary results of the first 164 participants (between 19 and 24 per cell) highlight which repair strategies are effective relative to ability, integrity, and benevolence and the robot's anthropomorphism. Overall, this paper contributes to the HRI trust repair literature.