Bibliography
Conflicts may arise at any time during military debriefing meetings, especially in high-intensity deployed settings. When such conflicts arise, it takes time to get everyone back into a receptive state of mind so that they engage in reflective discussion rather than unproductive arguing. Some have proposed that social robots equipped with social abilities, such as emotion regulation through rapport building, may help to de-escalate these situations and facilitate critical operational decisions. However, in military settings, the AI agent used in the pre-brief of a mission may not be the same one used in the debrief. The purpose of this study was to determine whether a brief rapport-building session with a social robot could create a connection between a human and a robot agent, and whether consistency in the embodiment of the robot agent was necessary for maintaining this connection once formed. We report the results of a pilot study conducted at the United States Air Force Academy that simulated a military mission (i.e., Gravity and Strike). Measures of participants' connection with the agent, sense of trust, and overall likeability suggest that early rapport building can be beneficial for military missions.
Trust is an important characteristic of successful interactions between humans and agents in many scenarios. Self-driving scenarios are of particular relevance when discussing trust because of the high stakes of erroneous decisions. The present study investigates decision-making and aspects of trust in a realistic driving scenario in which an autonomous agent provides guidance to humans. To this end, a simulated driving environment based on a college campus was developed. An online and an in-person experiment were conducted to examine the impacts of mistakes made by the self-driving AI agent on participants' decisions and trust. During the experiments, participants were asked to complete a series of driving tasks and make a sequence of decisions under time pressure. Behavior analysis indicated a similar relative trend in the decisions across the two experiments. Survey results revealed that a mistake made by the self-driving AI agent at the beginning had a significant impact on participants' trust. In addition, participants reported similar overall experience and feelings across the two experimental conditions. These findings add to our understanding of trust in human-robot interaction scenarios and provide valuable insights for future research on human-robot trust.
This research used an Autonomous Security Robot (ASR) scenario to examine public reactions to a robot that possesses the authority and capability to inflict harm on a human. Individual differences in personality and the Perfect Automation Schema (PAS) were examined as predictors of trust in the ASR. Participants (N=316) from Amazon Mechanical Turk (MTurk) rated their trust of the ASR and desire to use ASRs in public and military contexts after watching a 2-minute video depicting the robot interacting with three research confederates. The video showed the robot using force against one of the three confederates with a non-lethal device. Results demonstrated that individual-differences factors were related to trust and desired use of the ASR. In multiple regression analyses, agreeableness and both facets of the PAS (high expectations and all-or-none beliefs) demonstrated unique associations with trust. Agreeableness, intellect, and high expectations were uniquely related to desired use in both public and military domains. This study showed that individual differences influence trust and desired use of ASRs, demonstrating that societal reactions to ASRs may vary considerably across individuals.
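To make the analytic step concrete, the following is a minimal sketch of the kind of multiple regression described above, predicting trust ratings from agreeableness and the two PAS facets. The data are simulated and all coefficients, variable names, and effect sizes are illustrative assumptions, not the study's materials or results.

```python
# Minimal sketch of a multiple regression predicting trust from individual
# differences. Simulated data; illustrative only, not the study's results.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 316
agreeableness = rng.normal(0, 1, n)
high_expectations = rng.normal(0, 1, n)
all_or_none = rng.normal(0, 1, n)

# Assumed data-generating relationship for illustration only.
trust = (0.3 * agreeableness + 0.4 * high_expectations
         - 0.2 * all_or_none + rng.normal(0, 1, n))

X = sm.add_constant(np.column_stack([agreeableness, high_expectations, all_or_none]))
model = sm.OLS(trust, X).fit()
print(model.summary(xname=["const", "agreeableness", "high_expectations", "all_or_none"]))
```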
Trust is a key element of successful human-robot interaction. One challenging problem in this domain is how to construct a formulation that optimally models this trust phenomenon. This paper presents a framework for modeling human-robot trust that represents the human decision-making process in terms of trust states. Using this formulation, we then discuss a generalized model of human-robot trust based on Hidden Markov Models and Logistic Regression. The proposed approach is validated on datasets collected from two different human-subject studies in which the human is given the ability to take advice from a robot. Both experimental scenarios were time-sensitive, in that the human had to make a decision within a limited time period, but each scenario featured a different level of cognitive load. The experimental results demonstrate that the proposed formulation can be used to model trust, in that the system can predict whether or not the human will take advice from the robot. We found that prediction performance degrades after the robot makes a mistake. The validation of this approach on two scenarios implies that the model can be applied to other interactive scenarios as long as the interaction dynamics fit the proposed formulation. Directions for future improvements are discussed.
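As an illustration of this kind of formulation, the following is a minimal sketch, under assumed parameters, of a two-state trust filter whose emission model is a logistic function of interaction features. The transition matrix, weights, and feature names are hypothetical and are not taken from the paper.

```python
# Minimal sketch of a two-state trust filter with a logistic emission model.
# All parameter values and variable names are illustrative assumptions.
import numpy as np

STATES = ("low_trust", "high_trust")

# Assumed transition matrix P(next_state | current_state), rows = current state.
TRANSITIONS = np.array([
    [0.8, 0.2],   # from low_trust
    [0.3, 0.7],   # from high_trust
])

# Assumed logistic weights mapping (bias, robot_was_correct_last_step)
# to the probability of taking the robot's advice, one row per trust state.
EMISSION_WEIGHTS = np.array([
    [-1.0, 0.5],  # low_trust: advice rarely taken
    [ 1.0, 1.0],  # high_trust: advice usually taken
])

def take_advice_prob(state_idx, features):
    """P(human takes advice | trust state, features) via a logistic link."""
    logit = EMISSION_WEIGHTS[state_idx] @ features
    return 1.0 / (1.0 + np.exp(-logit))

def forward_step(belief, took_advice, features):
    """One step of the HMM forward filter over the latent trust state."""
    predicted = TRANSITIONS.T @ belief            # predict the next trust state
    likelihood = np.array([
        take_advice_prob(s, features) if took_advice
        else 1.0 - take_advice_prob(s, features)
        for s in range(len(STATES))
    ])
    posterior = likelihood * predicted            # condition on the observed decision
    return posterior / posterior.sum()

# Example: the belief drifts after the human ignores advice following a robot mistake.
belief = np.array([0.5, 0.5])
for took_advice, robot_was_correct in [(True, 1.0), (True, 1.0), (False, 0.0)]:
    features = np.array([1.0, robot_was_correct])  # [bias, robot_was_correct]
    belief = forward_step(belief, took_advice, features)
    print(dict(zip(STATES, belief.round(3))))
```

Given the filtered belief over trust states, the next advice-taking decision could be predicted by marginalizing the logistic emission probabilities over that belief.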
The current study explored the influence of trust and distrust behaviors on performance, process, and purpose (trustworthiness) perceptions over time when participants were paired with a robot partner. We examined the changes in trustworthiness perceptions after trust violations and the degree of trust repair after those violations. Results indicated that performance, process, and purpose perceptions were all affected by trust violations, but perceptions of process and purpose decreased more than performance perceptions following a distrust behavior. Similarly, trust repair was achieved in performance perceptions, but trust repair in perceived process and purpose was absent. When a trust violation occurred, process and purpose perceptions deteriorated and failed to recover, and the violation led participants to perceive the robot as untrustworthy. In contrast, although trust violations decreased partner performance perceptions, subsequent trust behaviors resulted in trust repair for performance. These findings suggest that people are more sensitive to distrust behaviors in their perceptions of process and purpose than in their perceptions of performance.
As we expect the presence of autonomous robots in our everyday lives to increase, we must consider that people will have not only to accept robots as a fundamental part of their lives, but also to trust them to engage reliably and securely in collaborative tasks. Several studies have shown that people are more comfortable interacting with robots that respect social conventions. However, it is still not clear whether a robot that expresses social conventions will more readily gain people's trust. In this study, we aimed to assess whether the use of social behaviours and natural communication affects humans' sense of trust and companionship towards robots. We conducted a between-subjects study in which participants' trust was tested in three scenarios of increasing trust criticality (low, medium, high), interacting with either a social or a non-social robot. Our findings showed that participants trusted a social and a non-social robot equally in the low- and medium-consequence scenarios. In contrast, in the more sensitive task, participants' choice to trust the robot was affected more when the robot expressed social cues, with a consequent decrease in their trust in the robot.
With self-driving cars making their way on to our roads, we ask not what it would take for them to gain acceptance among consumers, but what impact they may have on other drivers. How they will be perceived and whether they will be trusted will likely have a major effect on traffic flow and vehicular safety. This work first undertakes an exploratory factor analysis to validate a trust scale for human-robot interaction and shows how previously validated metrics and general trust theory support a more complete model of trust that has increased applicability in the driving domain. We experimentally test this expanded model in the context of human-automation interaction during simulated driving, revealing how using these dimensions uncovers significant biases within human-robot trust that may have particularly deleterious effects when it comes to sharing our future roads with automated vehicles.
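For readers unfamiliar with the scale-validation step mentioned above, the following is a minimal sketch of an exploratory factor analysis over simulated questionnaire responses. The item count, factor count, and data are assumptions for illustration and do not reproduce the trust scale or loadings reported in the paper.

```python
# Minimal sketch of an exploratory factor analysis on trust questionnaire items.
# Simulated responses; illustrative only, not the paper's scale or results.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items = 200, 9
# 7-point Likert-style items, drawn at random purely for illustration.
responses = rng.integers(1, 8, size=(n_respondents, n_items)).astype(float)

fa = FactorAnalysis(n_components=3, rotation="varimax")
fa.fit(responses)

# Rows = factors, columns = items; large absolute loadings indicate which
# items cluster onto which trust dimension.
loadings = fa.components_
for f, row in enumerate(loadings):
    top_items = np.argsort(-np.abs(row))[:3]
    print(f"factor {f}: strongest items {top_items.tolist()}")
```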
Trust is a critical issue in human-robot interaction (HRI), as it is at the core of humans' willingness to accept and use a non-human agent. Theory of Mind (ToM) is the ability to understand the beliefs and intentions of others that may differ from one's own. Evidence in psychology and HRI suggests that trust and ToM are interconnected and interdependent concepts, as the decision to trust another agent must depend on our own representation of that entity's actions, beliefs, and intentions. However, very few works take the robot's ToM into consideration when studying trust in HRI. In this paper, we investigated whether exposure to the ToM abilities of a robot affects humans' trust towards the robot. To this end, participants played a Price Game with a humanoid robot (Pepper) that was presented as having either low-level or high-level ToM. Specifically, the participants were asked to accept the robot's price evaluations of common objects. The participants' willingness to change their own price judgements of the objects (i.e., to accept the price the robot suggested) was used as the main measure of trust towards the robot. Our experimental results showed that robots presented as possessing high-level ToM abilities were trusted more than robots presented with low-level ToM skills.
Embodiment of actions and tasks has typically been analyzed from the robot's perspective, where the robot's embodiment helps develop and maintain trust. Here, we ask a similar question from the human perspective. Embodied cognition has been shown in the cognitive science literature to produce increased social empathy and cooperation. To understand how human embodiment can help develop and increase trust in human-robot interactions, we conducted a study in which participants were tasked with memorizing Greek letters associated with dance motions with the help of a humanoid robot. Participants either performed the dance motions or used a touch screen during the interaction. The results showed that participants' trust in the robot increased at a higher rate when they embodied the motions than when they used a touch screen device.
Robots operating alongside humans in field environments have the potential to greatly increase the situational awareness of their human teammates. A significant challenge, however, is the efficient conveyance of what the robot perceives to the human in order to achieve improved situational awareness. We believe augmented reality (AR), which allows a human to simultaneously perceive the real world and digital information situated virtually in the real world, has the potential to address this issue. Motivated by the emerging prevalence of practical human-wearable AR devices, we present a system that enables a robot to perform cooperative search with a human teammate, where the robot can both share search results and assist the human teammate in navigation to the search target. We demonstrate this ability in a search task in an uninstrumented environment where the robot identifies and localizes targets and provides navigation direction via AR to bring the human to the correct target.
In medical human-robot interactions, trust plays an important role, since for patients there may be more at stake than in other kinds of encounters with robots. In the current study, we address issues of trust in interaction with a prototype of a therapeutic robot, the Universal RoboTrainer, in which the therapist records patient-specific tasks by means of kinesthetic guidance of the patient's arm, which is connected to the robot. We carried out a user study with twelve pairs of participants who collaborated on recording a training program on the robot. We examine a) the degree to which participants identify the situation as uncomfortable or distressing, b) participants' own strategies to mitigate that stress, c) the degree to which the robot is held responsible for the problems occurring and the amount of agency ascribed to it, and d) the effect of usability issues, when they arise, on participants' trust. We find signs of distress mostly in contexts with usability issues, as well as many verbal and kinesthetic mitigation strategies intuitively employed by the participants. Recommendations for robots to increase users' trust in kinesthetic interactions include the timely production of verbal cues that continuously confirm that everything is alright, as well as increased contingency in the presentation of strategies for recovering from usability issues that arise.
In this paper, we study trust-related human factors in supervisory control of swarm robots with varied levels of autonomy (LOA) in a target foraging task. We compare three LOAs: manual, mixed-initiative (MI), and fully autonomous. In the manual LOA, the human operator chooses headings for a flocking swarm, issuing new headings as needed. In the fully autonomous LOA, the swarm is redirected automatically by changing headings using a search algorithm. In the mixed-initiative LOA, if performance declines, control is switched from human to swarm or from swarm to human. The results of this work extend current knowledge on human factors in swarm supervisory control. Specifically, the finding that the relationship between trust and performance improved for passively monitoring operators (i.e., improved situation awareness in higher LOAs) is particularly novel in its contradiction of earlier work. We also find that operators switch the degree of autonomy when their trust in the swarm system is low. Last, our analysis confirms operators' preference for a lower LOA in a new domain, swarm control.
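The abstract does not specify the exact switching rule for the mixed-initiative LOA, so the following is a hedged sketch of one plausible realization: control is handed to the other party when a windowed performance score drops below a threshold. The metric, window size, and threshold are assumptions, not the paper's actual mechanism.

```python
# Minimal sketch of a mixed-initiative switching rule: hand control to the other
# party when a windowed performance score drops below a threshold. The metric,
# window size, and threshold are assumptions; the paper's rule may differ.
from collections import deque

class MixedInitiativeSwitch:
    def __init__(self, window=10, threshold=0.4):
        self.scores = deque(maxlen=window)  # recent per-step performance scores in [0, 1]
        self.threshold = threshold
        self.controller = "human"           # who currently sets the swarm heading

    def update(self, performance_score):
        self.scores.append(performance_score)
        if len(self.scores) == self.scores.maxlen:
            mean_score = sum(self.scores) / len(self.scores)
            if mean_score < self.threshold:
                # Performance has declined under the current controller: switch.
                self.controller = "swarm" if self.controller == "human" else "human"
                self.scores.clear()         # restart the window for the new controller
        return self.controller

# Example: sustained poor performance under the human triggers a hand-off.
switch = MixedInitiativeSwitch(window=5, threshold=0.4)
for score in [0.6, 0.5, 0.3, 0.2, 0.2, 0.1]:
    print(switch.update(score))
```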
The presence of robots is becoming more apparent as technology progresses and the market focus transitions from smartphones to robotic personal assistants such as those provided by Amazon and Google. The integration of robots into our societies is an inevitable trend in which robots in many forms and with many functionalities will provide services to humans. This calls for an understanding of how humans are affected both by the presence of robots and by the reliance on them to perform services. In this paper we explore the effects that robots have on humans when a service is performed on request. We expose three groups of human participants to three levels of service completion performed by robots. We record and analyse human perceptions such as propensity to trust, competency, responsiveness, sociability, and teamwork ability. Our results demonstrate that humans tend to trust robots and are more willing to interact with them when the robots autonomously recover from failure by requesting help from other robots to fulfil their service. This supports the view that autonomy and team-working capabilities should be built into robots in an effort to strengthen trust in robots performing a service.
A recent study featuring a new kind of care robot indicated that participants expect a robot's ethical decision-making to be transparent in order to develop trust, even though the same kind of 'inspection of thoughts' is not expected of a human carer. At first glance, this might suggest that robot transparency mechanisms are required for users to develop trust in robot-made ethical decisions. However, the participants were found to desire transparency only when they did not know the specifics of a human-robot social interaction. Humans trust others without observing their thoughts, which implies there are other means of determining trustworthiness. The study reported here suggests that this means is social interaction and observation, signifying that trust is a social construct, and that these 'social determinants of trust' are the transparent elements. This socially determined behaviour draws on notions of virtue ethics: if a caregiver (nurse or robot) consistently provides good, ethical care, then patients can trust that caregiver to continue to do so. The same social determinants may apply to care robots, and thus it ought to be possible to trust them without the ability to see their thoughts. This study suggests why transparency mechanisms may not be effective in helping to develop trust in care robots' ethical decision-making. It also suggests that roboticists need to build sociable elements into care robots to help patients develop trust in the care robot's ethical decision-making.
With the recent advances in computing, artificial intelligence (AI) is quickly becoming a key component of future advanced applications. In one application in particular, AI has played a major role: revolutionizing traditional healthcare assistance. Using embodied interactive agents, or interactive robots, in healthcare scenarios has emerged as an innovative way to interact with patients. As an essential factor in interpersonal interaction, trust plays a crucial role in establishing and maintaining a patient-agent relationship. In this paper, we discuss a healthcare-related study in which we examine aspects of trust between humans and interactive robots during a therapy intervention in which the agent provides corrective feedback. A total of twenty participants were randomly assigned to receive corrective feedback from either a robotic agent or a human agent. Survey results indicate that trust in a therapy intervention coupled with a robotic agent is comparable to trust in an intervention coupled with a human agent. Results also show a trend that the agent condition has a medium-sized effect on trust. In addition, we found that participants in the robot therapist condition were 3.5 times more likely to have trust involved in their decision than participants in the human therapist condition. These results indicate that the deployment of interactive robot agents in healthcare scenarios has the potential to maintain quality of health for future generations.
When robots and human users collaborate, trust is essential for user acceptance and engagement. In this paper, we investigate two factors thought to influence user trust towards a robot: preference elicitation (a combination of user involvement and explanation) and embodiment. We set our experiment in the application domain of a restaurant recommender system, assessing trust via user decision making and perceived source credibility. Previous research in this area has used simulated environments and recommender systems that present the user with the best choice from a pool of options. This experiment builds on past work in two ways: first, we strengthen the ecological validity of our experimental paradigm by incorporating perceived risk during decision making; and second, we use a system that recommends a non-optimal choice to the user. While no effect of embodiment was found on trust, the inclusion of preference elicitation features significantly increased user trust towards the robot recommender system. These findings have implications for marketing and health promotion in relation to human-robot interaction and call for further investigation into the development and maintenance of trust between robot and user.
With recent advances in robotics, it is expected that robots will become increasingly common in human environments, such as homes and workplaces, where they will assist and collaborate with humans on a variety of tasks. During these collaborations, it is inevitable that disagreements in decisions will occur between humans and robots. Among the factors that determine which decision a human should ultimately follow, their own or the robot's, trust is critical to consider. This study investigates individuals' behaviors and aspects of trust in a problem-solving situation in which a decision must be made in a bounded amount of time. A between-subjects experiment was conducted with 100 participants. With the assistance of a humanoid robot, participants were asked to tackle a cognitive task within a given time frame. Each participant was randomly assigned to one of two initial conditions: 1) a working robot, in which the robot provided a correct answer, or 2) a faulty robot, in which the robot provided an incorrect answer. The impact of the faulty robot behavior on participants' decisions to follow the robot's suggested answer was analyzed. Survey responses about trust were collected after interacting with the robot. Results indicated that the first impression has a significant impact on participants' willingness to trust a robot's advice during a disagreement. In addition, this study found evidence that individuals still trust a malfunctioning robot even after they have observed its faulty behavior.