Robots that Take Advantage of Human Trust
Title | Robots that Take Advantage of Human Trust |
Publication Type | Conference Paper |
Year of Publication | 2019 |
Authors | Losey, D. P., Sadigh, D. |
Conference Name | 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) |
Date Published | Nov. 2019 |
Publisher | IEEE |
ISBN Number | 978-1-7281-4004-9 |
Keywords | communicative robot behavior, game theory, Human Behavior, human factors, human trust, human-robot interaction, intelligent robots, learning (artificial intelligence), mobile robots, optimal actions, optimisation, pubcrawl, rational-intelligent robots, resilience, Resiliency, robot actions, Robot Trust, robust trust, uncertain human |
Abstract | Humans often assume that robots are rational. We believe robots take optimal actions given their objective; hence, when we are uncertain about what the robot's objective is, we interpret the robot's actions as optimal with respect to our estimate of its objective. This approach makes sense when robots straightforwardly optimize their objective, and it enables humans to learn what the robot is trying to achieve. However, our insight is that, when robots are aware that humans learn by trusting that the robot's actions are rational, intelligent robots do not act as the human expects; instead, they take advantage of the human's trust and exploit it to more efficiently optimize their own objective. In this paper, we formally model instances of human-robot interaction (HRI) where the human does not know the robot's objective as a two-player game. We formulate different ways in which the robot can model the uncertain human, and we compare solutions of this game when the robot uses conservative, optimistic, rational, and trusting human models. In an offline linear-quadratic case study and a real-time user study, we show that trusting human models can naturally lead to communicative robot behavior, which influences end-users and increases their involvement. |
URL | https://ieeexplore.ieee.org/document/8968564 |
DOI | 10.1109/IROS40897.2019.8968564 |
Citation Key | losey_robots_2019 |
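The abstract's core idea — a human who infers the robot's hidden objective by assuming its actions are rational, and a robot that plans with that inference in mind — can be illustrated with a minimal sketch. Everything below (the goal names, action costs, Boltzmann observer, and belief-penalty weight) is a hypothetical toy setup for illustration, not the paper's actual formulation:

```python
import math

# Toy setup (all names and numbers are assumptions): the robot's goal is one
# of two candidates, hidden from the human. The human watches one action and
# infers the goal by assuming the robot acts (noisily) rationally.
TRUE_GOAL = "B"
# action -> cost under each candidate goal (hypothetical values)
COSTS = {
    "direct": {"A": 1.0, "B": 1.0},  # cheap but equally likely under either goal
    "signal": {"A": 5.0, "B": 2.0},  # costlier, but only sensible under goal B
}

def human_posterior(action, prior=0.5):
    """Human's belief that the goal is TRUE_GOAL after seeing `action`,
    assuming a Boltzmann-rational robot: P(a | g) proportional to exp(-cost(a, g))."""
    def likelihood(a, g):
        z = sum(math.exp(-COSTS[ap][g]) for ap in COSTS)
        return math.exp(-COSTS[a][g]) / z
    num = prior * likelihood(action, TRUE_GOAL)
    den = num + (1 - prior) * likelihood(action, "A")
    return num / den

def choose(weight_on_belief=0.0):
    """Pick the action minimizing the robot's own cost plus a penalty for
    leaving the human uncertain; weight 0 recovers a myopic robot that
    ignores the human's inference."""
    def total(a):
        return COSTS[a][TRUE_GOAL] + weight_on_belief * (1 - human_posterior(a))
    return min(COSTS, key=total)

print(choose(0.0))  # myopic robot picks the cheap action: direct
print(choose(3.0))  # robot modeling the trusting human picks: signal
```

With the belief penalty turned on, the robot deliberately pays extra cost for the informative "signal" action because it anticipates how the trusting human will interpret it — a toy analogue of the communicative behavior the abstract reports.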