Biblio

Filters: Keyword is anthropomorphism
2022-06-09
Cohen, Myke C., Demir, Mustafa, Chiou, Erin K., Cooke, Nancy J..  2021.  The Dynamics of Trust and Verbal Anthropomorphism in Human-Autonomy Teaming. 2021 IEEE 2nd International Conference on Human-Machine Systems (ICHMS). :1–6.
Trust in autonomous teammates has been shown to be a key factor in human-autonomy team (HAT) performance, and anthropomorphism is a closely related construct that is underexplored in the HAT literature. This study investigates whether perceived anthropomorphism can be measured from team communication behaviors in a simulated remotely piloted aircraft system task environment, in which two humans in unique roles were asked to team with a synthetic (i.e., autonomous) pilot agent. We compared verbal and self-reported measures of anthropomorphism with team error-handling performance and trust in the synthetic pilot. Results of this study show that trends in verbal anthropomorphism follow the same patterns expected from self-reported measures of anthropomorphism, with respect to fluctuations in trust resulting from autonomy failures.

2022-02-03
Esterwood, Connor, Robert, Lionel P..  2021.  Do You Still Trust Me? Human-Robot Trust Repair Strategies. 2021 30th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :183–188.
Trust is vital to promoting human-robot collaboration, but like human teammates, robots make mistakes that undermine trust. As a result, a human’s perception of his or her robot teammate’s trustworthiness can dramatically decrease [1], [2], [3], [4]. Trustworthiness consists of three distinct dimensions: ability (i.e., competency), benevolence (i.e., concern for the trustor), and integrity (i.e., honesty) [5], [6]. Taken together, decreases in trustworthiness decrease trust in the robot [7]. To address this, we conducted a 2 (high vs. low anthropomorphism) x 4 (trust repair strategies) between-subjects experiment. Preliminary results from the first 164 participants (between 19 and 24 per cell) highlight which repair strategies are effective relative to ability, integrity, and benevolence, and to the robot’s anthropomorphism. Overall, this paper contributes to the HRI trust repair literature.

2018-12-10
Ha, Taehyun, Lee, Sangwon, Kim, Sangyeon.  2018.  Designing Explainability of an Artificial Intelligence System. Proceedings of the Technology, Mind, and Society. :14:1–14:1.
Explainability and accuracy of machine learning algorithms usually lie in a trade-off relationship. Several algorithms, such as deep-learning artificial neural networks, have high accuracy but low explainability. Since there are only limited ways to access the learning and prediction processes of these algorithms, researchers and users have not been able to understand how the results were produced. However, a recent project, explainable artificial intelligence (XAI) by DARPA, showed that AI systems can be both highly explainable and accurate. Several technical reports on XAI suggested ways of extracting explainable features and described their positive effects on users; the results showed that the explainability of AI helped users understand and trust the system. However, only a few studies have addressed why explainability can bring positive effects to users. We suggest theoretical reasons drawn from attribution theory and anthropomorphism studies. Through a review, we develop three hypotheses: (1) causal attribution is part of human nature, and thus a system that provides causal explanations of its process will influence users’ attributions of the system’s results; (2) based on these attribution results, users will perceive the system as human-like, which will be a motivation for anthropomorphism; (3) users will then perceive the system through this anthropomorphism. We provide a research framework for designing causal explainability of an AI system and discuss the expected results of the research.

2018-05-30
Ghazali, Aimi Shazwani, Ham, Jaap, Barakova, Emilia, Markopoulos, Panos.  2017.  The Influence of Social Cues and Controlling Language on Agent's Expertise, Sociability, and Trustworthiness. Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction. :125–126.
For optimal human-robot interaction, understanding the determinants and components of anthropomorphism is crucial. This research assessed the influence of an agent's social cues and use of controlling language on users' perceptions of the agent's expertise, sociability, and trustworthiness. In a game context, the agent attempted to persuade users to modify their choices using high- or low-controlling language and different levels of social cues (text-only advice with no robot embodiment, a robot with elementary social cues, and a robot with advanced social cues). As expected, low-controlling language led to higher perceived anthropomorphism, while the robotic agent with the most social cues was selected as the most expert advisor and the non-social agent as the most trusted advisor.