Biblio

Filters: Keyword is HRI
2021-02-03
Alarcon, G. M., Gibson, A. M., Jessup, S. A.  2020.  Trust Repair in Performance, Process, and Purpose Factors of Human-Robot Trust. 2020 IEEE International Conference on Human-Machine Systems (ICHMS). :1–6.

This study explored how trust and distrust behaviors influenced performance, process, and purpose (trustworthiness) perceptions over time when participants were paired with a robot partner. We examined changes in trustworthiness perceptions after trust violations and the trust repair that followed those violations. Results indicated that performance, process, and purpose perceptions were all affected by trust violations, but process and purpose perceptions decreased more than performance perceptions following a distrust behavior. Similarly, trust repair occurred in performance perceptions but was absent in process and purpose perceptions: once a violation occurred, process and purpose perceptions deteriorated, failed to recover, and left the robot perceived as untrustworthy. In contrast, performance perceptions, although lowered by trust violations, recovered after subsequent trust behaviors. These findings suggest that people are more sensitive to distrust behaviors in their perceptions of process and purpose than in their perceptions of performance.

Mou, W., Ruocco, M., Zanatto, D., Cangelosi, A.  2020.  When Would You Trust a Robot? A Study on Trust and Theory of Mind in Human-Robot Interactions. 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :956–962.

Trust is a critical issue in human-robot interaction (HRI), as it underlies humans' willingness to accept and use a non-human agent. Theory of Mind (ToM) is defined as the ability to understand beliefs and intentions of others that may differ from one's own. Evidence from psychology and HRI suggests that trust and ToM are interconnected and interdependent concepts, as the decision to trust another agent must depend on our own representation of that entity's actions, beliefs, and intentions. However, very few works take the robot's ToM into consideration when studying trust in HRI. In this paper, we investigated whether exposure to a robot's ToM abilities could affect humans' trust towards it. To this end, participants played a Price Game with a humanoid robot (Pepper) that was presented as having either low-level or high-level ToM. Specifically, participants were asked to accept the robot's price evaluations of common objects. Their willingness to change their own price judgements (i.e., to accept the price the robot suggested) served as the main measure of trust towards the robot. Our experimental results showed that robots presented with high-level ToM abilities were trusted more than robots presented with low-level ToM skills.