Biblio

Filters: Keyword is social robotics
2023-02-17
Rossi, Alessandra, Andriella, Antonio, Rossi, Silvia, Torras, Carme, Alenyà, Guillem.  2022.  Evaluating the Effect of Theory of Mind on People’s Trust in a Faulty Robot. 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :477–482.
The success of human-robot interaction is strongly affected by people’s ability to infer others’ intentions and behaviours, and by the level of people’s trust that others will abide by the same principles and social conventions to achieve a common goal. The ability to understand and reason about other agents’ mental states is known as Theory of Mind (ToM). ToM and trust, therefore, are key factors in the positive outcome of human-robot interaction. We believe that a robot endowed with a ToM is able to gain people’s trust, even when it may occasionally make errors. In this work, we present a user study in the field in which participants (N=123) interacted with a robot that may or may not have a ToM, and may or may not exhibit erroneous behaviour. Our findings indicate that a robot with ToM is perceived as more reliable, and participants trusted it more than a robot without a ToM, even when the robot made errors. Finally, ToM proved to be a key driver for tuning people’s trust in the robot even when the initial conditions of the interaction changed (i.e., loss and regain of trust in a longer relationship).
ISSN: 1944-9437
2020-12-01
Ullman, D., Malle, B. F..  2019.  Measuring Gains and Losses in Human-Robot Trust: Evidence for Differentiable Components of Trust. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :618–619.
Human-robot trust is crucial to successful human-robot interaction. We conducted a study with 798 participants distributed across 32 conditions using four dimensions of human-robot trust (reliable, capable, ethical, sincere) identified by the Multi-Dimensional Measure of Trust (MDMT). We tested whether these dimensions can differentially capture gains and losses in human-robot trust across robot roles and contexts. Using a 4 scenario × 4 trust dimension × 2 change direction between-subjects design, we found the behavior-change manipulation effective for each of the four subscales. However, the pattern of results best supported a two-dimensional conception of trust, with reliable-capable and ethical-sincere as the major constituents.