Biblio

Filters: Keyword is perceived trustworthiness
2020-12-01
Gao, Y., Sibirtseva, E., Castellano, G., Kragic, D.  2019.  Fast Adaptation with Meta-Reinforcement Learning for Trust Modelling in Human-Robot Interaction. 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). :305–312.

In socially assistive robotics, an important research area is the development of adaptation techniques and their effect on human-robot interaction. We present a meta-learning based policy gradient method for addressing the problem of adaptation in human-robot interaction, and we also investigate its role as a mechanism for trust modelling. By building an escape room scenario in mixed reality with a robot, we test our hypothesis that bi-directional trust can be influenced by different adaptation algorithms. We found that our proposed model increased the perceived trustworthiness of the robot and influenced the dynamics of gaining the human's trust. Additionally, participants reported that the robot perceived them as more trustworthy during interactions with the meta-learning based adaptation than with the previously studied statistical adaptation model.

2019-02-08
Jensen, Theodore, Albayram, Yusuf, Khan, Mohammad Maifi Hasan, Buck, Ross, Coman, Emil, Fahim, Md Abdullah Al.  2018.  Initial Trustworthiness Perceptions of a Drone System Based on Performance and Process Information. Proceedings of the 6th International Conference on Human-Agent Interaction. :229–237.

Prior work notes dispositional, learned, and situational aspects of trust in automation. However, no work has investigated the relative role of these factors in initial trust of an automated system. Moreover, researchers often treat trust in automation as unidimensional, whereas ability, integrity, and benevolence perceptions (i.e., trusting beliefs) may provide a more thorough understanding of trust dynamics. To investigate this, we recruited 163 participants on Amazon's Mechanical Turk (MTurk) and randomly assigned each to one of four videos describing a hypothetical drone system: a control video, or one that added system performance information, process information, or both. Participants reported on trusting beliefs in the system, propensity to trust other people, risk-taking tendencies, and trust in the government law enforcement agency behind the system. We found that financial risk-taking tendencies influenced trusting beliefs. Also, those who received process information were likely to have higher integrity and ability beliefs than those who did not, while those who received performance information were likely to have higher ability beliefs. Lastly, perceptions of structural assurance positively influenced all three trusting beliefs. Our findings suggest that a) users' risk-taking tendencies influence trustworthiness perceptions of systems, b) different types of information about a system have varied effects on the trustworthiness dimensions, and c) institutions play an important role in users' calibration of trust. Insights gained from this study can help design training materials and interfaces that improve user trust calibration in automated systems.