Biblio

Filters: Keyword is social cues
2021-02-03
Rossi, A., Dautenhahn, K., Koay, K. Lee, Walters, M. L.  2020.  How Social Robots Influence People’s Trust in Critical Situations. 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :1020–1025.

As we expect the presence of autonomous robots in our everyday lives to increase, we must consider that people will not only have to accept robots as a fundamental part of their lives, but will also have to trust them to engage reliably and securely in collaborative tasks. Several studies have shown that people are more comfortable interacting with robots that respect social conventions. However, it is still not clear whether a robot that expresses social conventions will gain people's trust more readily. In this study, we aimed to assess whether the use of social behaviours and natural communication can affect humans' sense of trust and companionship towards robots. We conducted a between-subjects study in which participants' trust was tested in three scenarios of increasing trust criticality (low, medium, high), interacting with either a social or a non-social robot. Our findings showed that participants trusted the social and the non-social robot equally in the low- and medium-consequence scenarios. In contrast, participants' decisions to trust the robot in the more sensitive task were affected more when the robot expressed social cues, with a consequent decrease in their trust in the robot.

2018-05-30
Ghazali, Aimi Shazwani, Ham, Jaap, Barakova, Emilia, Markopoulos, Panos.  2017.  The Influence of Social Cues and Controlling Language on Agent's Expertise, Sociability, and Trustworthiness. Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction. :125–126.

For optimal human-robot interaction, understanding the determinants and components of anthropomorphism is crucial. This research assessed the influence of an agent's social cues and use of controlling language on users' perceptions of the agent's expertise, sociability, and trustworthiness. In a game context, the agent attempted to persuade users to modify their choices using high or low controlling language and different levels of social cues (text-only advice with no robot embodiment, a robot with elementary social cues, and a robot with advanced social cues). As expected, low controlling language led to higher perceived anthropomorphism, while the robotic agent with the most social cues was selected as the most expert advisor and the non-social agent as the most trusted advisor.