Biblio

Filters: Author is Howard, A.
2021-02-03
Ye, S., Feigh, K., Howard, A.  2020.  Learning in Motion: Dynamic Interactions for Increased Trust in Human-Robot Interaction Games. 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :1186–1189.

Embodiment of actions and tasks has typically been analyzed from the robot's perspective, where the robot's embodiment helps develop and maintain trust. However, we ask a similar question from the human perspective. Embodied cognition has been shown in the cognitive science literature to produce increased social empathy and cooperation. To understand how human embodiment can help develop and increase trust in human-robot interactions, we conducted a study in which participants were tasked with memorizing Greek letters associated with dance motions, with the help of a humanoid robot. Participants either performed the dance motions or used a touch screen during the interaction. The results showed that participants' trust in the robot increased at a higher rate when they embodied the motions than when they used a touch screen device.

Xu, J., Howard, A.  2020.  Would you Take Advice from a Robot? Developing a Framework for Inferring Human-Robot Trust in Time-Sensitive Scenarios. 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :814–820.

Trust is a key element for successful human-robot interaction. One challenging problem in this domain is how to construct a formulation that optimally models this trust phenomenon. This paper presents a framework for modeling human-robot trust that represents the human decision-making process as a formulation over trust states. Using this formulation, we then discuss a generalized model of human-robot trust based on Hidden Markov Models and Logistic Regression. The proposed approach is validated on datasets collected from two different human subject studies in which the human is given the ability to take advice from a robot. Both experimental scenarios were time-sensitive, in that the human had to make a decision within a limited time period, but each scenario featured a different level of cognitive load. The experimental results demonstrate that the proposed formulation can be used to model trust, in that the system can predict whether the human will decide to take advice (or not) from the robot. We found that prediction performance degrades after the robot makes a mistake. The validation of this approach on two scenarios implies that the model can be applied to other interactive scenarios as long as the interaction dynamics fit the proposed formulation. Directions for future improvements are discussed.
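
The abstract does not specify the model's structure or parameters, but a minimal sketch of the kind of pipeline it describes, a Hidden Markov Model filtered over binary take-advice observations feeding a logistic regression that predicts the next decision, might look as follows. The two trust states, the transition and emission matrices, the prior, and the toy interaction history are all illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical two-state trust HMM (state 0 = low trust, state 1 = high trust).
# These matrices are illustrative placeholders, not the paper's estimates.
A = np.array([[0.8, 0.2],   # P(next state | low trust)
              [0.1, 0.9]])  # P(next state | high trust)
B = np.array([[0.7, 0.3],   # P(obs | low trust): obs 0 = reject advice, 1 = accept
              [0.2, 0.8]])  # P(obs | high trust)
pi = np.array([0.5, 0.5])   # uniform prior over trust states

def trust_belief(observations):
    """Forward-algorithm filtering: belief over trust states after each observation."""
    alpha = pi * B[:, observations[0]]
    alpha /= alpha.sum()
    beliefs = [alpha]
    for obs in observations[1:]:
        alpha = (alpha @ A) * B[:, obs]   # predict, then weight by the new observation
        alpha /= alpha.sum()
        beliefs.append(alpha)
    return np.array(beliefs)

# Toy interaction history: 1 = took the robot's advice, 0 = did not.
history = [1, 1, 0, 1, 1]
beliefs = trust_belief(history)

# Logistic regression maps the filtered trust belief to the probability of
# accepting advice on the next trial (trained here on the toy data above).
X = beliefs[:-1]   # belief in hand before each decision
y = history[1:]    # the decision that followed
clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(beliefs[-1:]))  # [P(reject), P(accept)] for the next trial
```

The split mirrors the abstract's description: the HMM carries a latent trust state across trials, while the logistic regression maps the filtered trust belief to a concrete accept-or-reject prediction.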

Xu, J., Howard, A.  2020.  How much do you Trust your Self-Driving Car? Exploring Human-Robot Trust in High-Risk Scenarios. 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :4273–4280.

Trust is an important characteristic of successful interactions between humans and agents in many scenarios. Self-driving scenarios are of particular relevance to trust because of the high-risk nature of erroneous decisions. The present study investigates decision-making and aspects of trust in a realistic driving scenario in which an autonomous agent provides guidance to humans. To this end, a simulated driving environment based on a college campus was developed. An online experiment and an in-person experiment were conducted to examine the impact of mistakes made by the self-driving AI agent on participants' decisions and trust. During the experiments, participants were asked to complete a series of driving tasks and make a sequence of decisions under time pressure. Behavior analysis indicated a similar relative trend in decisions across the two experiments. Survey results revealed that a mistake made by the self-driving AI agent at the beginning had a significant impact on participants' trust. In addition, participants reported similar overall experiences and feelings across the two experimental conditions. These findings add to our understanding of trust in human-robot interaction scenarios and provide valuable insights for future research on human-robot trust.

2020-12-01
Xu, J., Howard, A.  2018.  The Impact of First Impressions on Human-Robot Trust During Problem-Solving Scenarios. 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). :435–441.

With recent advances in robotics, robots are expected to become increasingly common in human environments, such as homes and workplaces, where they will assist and collaborate with humans on a variety of tasks. During these collaborations, disagreements in decisions between humans and robots are inevitable. Among the factors that determine whose decision a human ultimately follows, their own or the robot's, trust is critical. This study investigates individuals' behaviors and aspects of trust in a problem-solving situation in which a decision must be made in a bounded amount of time. A between-subject experiment was conducted with 100 participants. With the assistance of a humanoid robot, participants were asked to complete a cognitive task within a given time frame. Each participant was randomly assigned to one of two initial conditions: 1) a working robot that provided a correct answer or 2) a faulty robot that provided an incorrect answer. The impact of the faulty robot's behavior on participants' decisions to follow the robot's suggested answer was analyzed. Survey responses about trust were collected after the interaction. Results indicated that the first impression has a significant impact on participants' willingness to trust a robot's advice during a disagreement. In addition, the study found evidence that individuals retain trust in a malfunctioning robot even after observing its faulty behavior.

Xu, J., Bryant, D. G., Howard, A.  2018.  Would You Trust a Robot Therapist? Validating the Equivalency of Trust in Human-Robot Healthcare Scenarios. 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). :442–447.

With the recent advances in computing, artificial intelligence (AI) is quickly becoming a key component of advanced applications. In one application in particular, AI has played a major role: revolutionizing traditional healthcare assistance. Using embodied interactive agents, or interactive robots, in healthcare scenarios has emerged as an innovative way to interact with patients. As an essential factor in interpersonal interaction, trust plays a crucial role in establishing and maintaining a patient-agent relationship. In this paper, we discuss a healthcare study in which we examine aspects of trust between humans and interactive robots during a therapy intervention in which the agent provides corrective feedback. A total of twenty participants were randomly assigned to receive corrective feedback from either a robotic agent or a human agent. Survey results indicate that trust in a therapy intervention coupled with a robotic agent is comparable to trust in an intervention coupled with a human agent. Results also show a trend that the agent condition has a medium-sized effect on trust. In addition, we found that participants in the robot therapist condition were 3.5 times more likely to have trust involved in their decision than participants in the human therapist condition. These results indicate that deploying interactive robot agents in healthcare scenarios has the potential to maintain quality of health for future generations.
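
The abstract does not say how the "3.5 times more likely" figure was computed, but a multiplier of this kind is commonly reported as an odds ratio. The sketch below shows how such a ratio falls out of a hypothetical 2x2 table of therapist condition versus whether trust factored into the decision; the counts are invented to reproduce a ratio of 3.5 for illustration and are not the study's data.

```python
# Hypothetical 2x2 table (counts invented for illustration; not the study's data):
#                     trust involved | trust not involved
robot_therapist = (7, 3)
human_therapist = (4, 6)

# Odds = (trust involved) / (trust not involved) within each condition.
odds_robot = robot_therapist[0] / robot_therapist[1]  # 7/3 ~ 2.33
odds_human = human_therapist[0] / human_therapist[1]  # 4/6 ~ 0.67

# Odds ratio compares the two conditions: (7/3) / (4/6) = 3.5.
odds_ratio = odds_robot / odds_human
print(f"odds ratio = {odds_ratio:.1f}")  # 3.5
```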