Biblio

Filters: Keyword is Robot Trust
2023-02-17
Hannibal, Glenda, Dobrosovestnova, Anna, Weiss, Astrid.  2022.  Tolerating Untrustworthy Robots: Studying Human Vulnerability Experience within a Privacy Scenario for Trust in Robots. 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :821–828.
Focusing on the human experience of vulnerability in everyday interaction scenarios is still a novel approach. So far, only a proof-of-concept online study has been conducted; to extend this work, we present a follow-up online study. We consider in more detail how the experience of vulnerability caused by a trust violation through a privacy breach affects trust ratings in an interaction scenario with the PEPPER robot assisting with clothes shopping. We report the results from 32 survey responses and 11 semi-structured interviews. Our findings reveal that the privacy paradox, a common observation describing the discrepancy between people's stated privacy concerns and their actual behavior to safeguard privacy, also applies when studying trust in HRI. Moreover, participants considered only the added value of utility and entertainment when deciding whether or not to interact with the robot again, not the privacy breach. We conclude that people might tolerate an untrustworthy robot even when they feel vulnerable in the everyday situation of clothes shopping.
ISSN: 1944-9437
Amaya-Mejía, Lina María, Duque-Suárez, Nicolás, Jaramillo-Ramírez, Daniel, Martinez, Carol.  2022.  Vision-Based Safety System for Barrierless Human-Robot Collaboration. 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). :7331–7336.

Human safety has always been the main priority when working near an industrial robot. With the rise of human-robot collaborative environments, physical barriers for collision avoidance have been disappearing, increasing the risk of accidents and the need for solutions that ensure safe Human-Robot Collaboration. This paper proposes a safety system that implements the Speed and Separation Monitoring (SSM) type of operation. For this, safety zones are defined in the robot's workspace following current standards for industrial collaborative robots. A deep learning-based computer vision system detects, tracks, and estimates the 3D position of operators close to the robot. The robot control system receives the operators' 3D positions and generates 3D representations of them in a simulation environment. Depending on the zone in which the closest operator is detected, the robot stops or changes its operating speed. Three different operation modes in which the human and robot interact are presented. Results show that the vision-based system can correctly detect and classify which safety zone an operator is located in, and that the proposed operation modes ensure that the robot's reaction and stop times stay within the required limits to guarantee safety.

ISSN: 2153-0866
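
As a rough sketch of the zone-based Speed and Separation Monitoring logic this abstract describes, the following Python snippet classifies the closest operator's distance into safety zones and returns a speed command. The zone radii and speed factors are invented for illustration; the paper derives its zones from the applicable industrial collaborative-robot standards.

```python
import math

# Illustrative zone radii in meters (assumed values, not the paper's).
STOP_RADIUS = 0.5     # operator inside this zone: robot must stop
REDUCED_RADIUS = 1.5  # operator inside this zone: robot slows down

def speed_command(robot_pos, operator_positions,
                  full_speed=1.0, reduced_speed=0.3):
    """Return a speed scaling factor based on the closest operator's zone."""
    if not operator_positions:
        return full_speed
    d = min(math.dist(robot_pos, p) for p in operator_positions)
    if d < STOP_RADIUS:
        return 0.0            # stop zone
    if d < REDUCED_RADIUS:
        return reduced_speed  # reduced-speed zone
    return full_speed         # free operation zone

# Example: one operator 1.2 m away puts the robot in the reduced-speed zone.
print(speed_command((0.0, 0.0, 0.0), [(1.2, 0.0, 0.0)]))  # 0.3
```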

Esterwood, Connor, Robert, Lionel P..  2022.  Having the Right Attitude: How Attitude Impacts Trust Repair in Human-Robot Interaction. 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :332–341.
Robot co-workers, like human co-workers, make mistakes that undermine trust. Yet, trust is just as important in promoting human-robot collaboration as it is in promoting human-human collaboration. In addition, individuals can significantly differ in their attitudes toward robots, which can impact or hinder their trust in robots. To better understand how individual attitude can influence trust repair strategies, we propose a theoretical model that draws from the theory of cognitive dissonance. To empirically verify this model, we conducted a between-subjects experiment with 100 participants assigned to one of four repair strategies (apologies, denials, explanations, or promises) over three trust violations. Individual attitudes did moderate the efficacy of repair strategies, and this effect differed over successive trust violations. Specifically, repair strategies were most effective relative to individual attitude during the second of the three trust violations, and promises were the trust repair strategy most impacted by an individual's attitude.
Patel, Sabina M., Phillips, Elizabeth, Lazzara, Elizabeth H..  2022.  Updating the paradigm: Investigating the role of swift trust in human-robot teams. 2022 IEEE 3rd International Conference on Human-Machine Systems (ICHMS). :1–1.
With the influx of technology use and human-robot teams, it is important to understand how swift trust is developed within these teams. Given this influx, we plan to study how surface cues (i.e., observable characteristics) and imported information (i.e., knowledge from external sources or personal experiences) affect the development of swift trust. We hypothesize that human-like surface-level cues and positive imported information will yield higher swift trust. These findings will help guide the assignment of human-robot teams in the future.
Rossi, Alessandra, Andriella, Antonio, Rossi, Silvia, Torras, Carme, Alenyà, Guillem.  2022.  Evaluating the Effect of Theory of Mind on People’s Trust in a Faulty Robot. 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :477–482.
The success of human-robot interaction is strongly affected by people's ability to infer others' intentions and behaviours, and by the level of people's trust that others will abide by the same principles and social conventions to achieve a common goal. The ability to understand and reason about other agents' mental states is known as Theory of Mind (ToM). ToM and trust, therefore, are key factors in the positive outcome of human-robot interaction. We believe that a robot endowed with a ToM is able to gain people's trust, even when it occasionally makes errors. In this work, we present a user study in the field in which participants (N=123) interacted with a robot that may or may not have a ToM, and may or may not exhibit erroneous behaviour. Our findings indicate that a robot with ToM is perceived as more reliable, and that participants trusted it more than a robot without a ToM even when the robot made errors. Finally, ToM proves to be a key driver for tuning people's trust in the robot even when the initial conditions of the interaction change (i.e., loss and regain of trust over a longer relationship).
ISSN: 1944-9437
Babel, Franziska, Baumann, Martin.  2022.  Designing Psychological Conflict Resolution Strategies for Autonomous Service Robots. 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :1146–1148.
As autonomous service robots become increasingly ubiquitous in our daily lives, human-robot conflicts will become more likely when humans and robots share the same spaces and resources. This thesis investigates the resolution of everyday human-robot conflicts in domestic and public contexts. The acceptability, trustworthiness, and effectiveness of verbal and non-verbal strategies for the robot to resolve a conflict in its favor are evaluated. Based on the assumption of the Media Equation and the CASA paradigm that people interact with computers as social actors, robot conflict resolution strategies were derived from social psychology and human-machine interaction research. The effectiveness, acceptability, and trustworthiness of those strategies were evaluated in online, virtual reality, and laboratory experiments. Future work includes determining the psychological processes of human-robot conflict resolution in further experimental studies.
Schüle, Mareike, Kraus, Johannes Maria, Babel, Franziska, Reißner, Nadine.  2022.  Patients' Trust in Hospital Transport Robots: Evaluation of the Role of User Dispositions, Anxiety, and Robot Characteristics. 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :246–255.
When designing interactions with robots in healthcare scenarios, it is important to understand how trust develops in situations characterized by vulnerability and uncertainty. The goal of this study was to investigate how technology-related user dispositions, anxiety, and robot characteristics influence trust. A second goal was to substantiate the association between hospital patients' trust and their intention to use a transport robot. In an online study, patients currently being treated in hospitals were introduced to the concept of a transport robot with both written and video-based material. Participants evaluated the robot several times. Technology-related user dispositions were found to be essentially associated with trust and the intention to use. Furthermore, hospital patients' anxiety was negatively associated with the intention to use, and this relationship was mediated by trust. Moreover, no effects of the manipulated robot characteristics were found. In conclusion, for a successful implementation of robots in hospital settings, patients' individual prior learning history (e.g., existing attitudes toward robots) and anxiety levels should be considered during the introduction and implementation phase.
Tilloo, Pallavi, Parron, Jesse, Obidat, Omar, Zhu, Michelle, Wang, Weitian.  2022.  A POMDP-based Robot-Human Trust Model for Human-Robot Collaboration. 2022 12th International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER). :1009–1014.
Trust is a cognitive ability that can depend on behavioral consistency. In this paper, a partially observable Markov decision process (POMDP)-based computational robot-human trust model is proposed for hand-over tasks in human-robot collaborative contexts. The robot's trust in its human partner is evaluated based on human behavior estimates and object detection during the hand-over task. The human-robot hand-over process is parameterized as a POMDP. The proposed approach is verified in real-world human-robot collaborative tasks. Results show that our approach can be successfully applied to human-robot hand-over tasks to achieve high efficiency, reduce redundant robot movements, and realize predictability and mutual understanding of the task.
ISSN: 2642-6633
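
The abstract does not reproduce the model's equations, but the core of any POMDP-based trust estimate is a Bayesian belief update over hidden states. Below is a minimal sketch assuming a hypothetical two-state formulation (human behaving consistently vs. inconsistently) with invented transition and observation probabilities; the robot's trust is read off as the belief in the consistent state.

```python
import numpy as np

# Hidden states: 0 = human behaves consistently, 1 = inconsistently (assumed).
T = np.array([[0.9, 0.1],    # state transition probabilities (illustrative)
              [0.3, 0.7]])
# Observations during a hand-over: 0 = smooth, 1 = hesitant/failed (assumed).
O = np.array([[0.8, 0.2],    # P(observation | state), illustrative
              [0.25, 0.75]])

def belief_update(belief, obs):
    """One POMDP belief update: predict with T, correct with O, normalize."""
    predicted = belief @ T
    corrected = predicted * O[:, obs]
    return corrected / corrected.sum()

belief = np.array([0.5, 0.5])  # uninformed prior over the two states
for obs in [0, 0, 1, 0]:       # a short sequence of hand-over outcomes
    belief = belief_update(belief, obs)
# The robot's trust can be read off as the belief in the "consistent" state.
print(f"trust estimate: {belief[0]:.2f}")
```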
Babel, Franziska, Hock, Philipp, Kraus, Johannes, Baumann, Martin.  2022.  It Will Not Take Long! Longitudinal Effects of Robot Conflict Resolution Strategies on Compliance, Acceptance and Trust. 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :225–235.
Domestic service robots are becoming increasingly prevalent and autonomous, which will make task priority conflicts more likely. The robot must be able to negotiate effectively and appropriately to gain priority if necessary. In previous human-robot interaction (HRI) studies, imitating human negotiation behavior was effective, but long-term effects have not been studied. Filling this research gap, an interactive online study (N=103) with two sessions and six trials was conducted. In a conflict scenario, participants repeatedly interacted with a domestic service robot that applied three different conflict resolution strategies: appeal, command, and diminution of request. The second manipulation was reinforcement (thanking) of compliance behavior (yes/no). This led to a 3×2×6 mixed-subject design. User acceptance, trust, user compliance with the robot, and self-reported compliance with a household member were assessed. The diminution of a request combined with positive reinforcement was the most effective strategy, and perceived trustworthiness increased significantly over time. For this strategy only, self-reported compliance rates with the human and the robot were similar. Therefore, applying this strategy potentially makes a robot as effective as a human requester. This paper contributes to the design of acceptable and effective robot conflict resolution strategies for long-term use.
Maehigashi, Akihiro.  2022.  The Nature of Trust in Communication Robots: Through Comparison with Trusts in Other People and AI systems. 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :900–903.
In this study, the nature of human trust in communication robots was experimentally investigated in comparison with trust in other people and artificial intelligence (AI) systems. The results of the experiment showed that trust in robots is basically similar to trust in AI systems in a calculation task where a single solution can be obtained, and partly similar to trust in other people in an emotion recognition task where multiple interpretations can be acceptable. This study will contribute to designing smooth interactions between people and communication robots.
2022-02-03
Xu, Chengtao, Song, Houbing.  2021.  Mixed Initiative Balance of Human-Swarm Teaming in Surveillance via Reinforcement learning. 2021 IEEE/AIAA 40th Digital Avionics Systems Conference (DASC). :1–10.
Human-machine teaming (HMT) operates in a context defined by the mission. Given the complexity of and disturbances in cooperation between humans and machines, a single machine has difficulty working with humans in terms of efficiency and workload. A swarm of machines provides a more feasible solution in such missions. Human-swarm teaming (HST) extends the concept of HMT to missions such as persistent surveillance, search-and-rescue, and warfare. Realizing HST faces several scientific challenges, for example, allocation strategies for high-level decision making, where the human usually plays a supervisory or decision-making role. The performance of such a fixed HST structure in actual mission operation can be affected by the supervisor's status in three general respects: workload, situational awareness, and trust toward the robot swarm teammate and mission performance. Besides, the complexity of a single human operator accessing multiple machine agents increases the work burden, so an interface between swarm teammates and human operators that simplifies the interaction process is desirable in HST. In this paper, instead of purely considering the workload of human teammates, we propose a computational model of human-swarm interaction (HSI) in a simulated map surveillance mission. A UAV swarm and a human supervisor are both assigned to search a predefined area of interest (AOI). The workload allocation of map monitoring is adjusted based on the status of the human worker and the swarm teammate. Workload, situational awareness, and trust are formulated as independent models that affect each other. A communication-aware UAV swarm persistent surveillance algorithm handles the swarm autonomy portion. Under different surveillance task loads, the swarm agent's trust parameter adjusts the autonomy level to fit the human operator's needs. Reinforcement learning is applied to seek a balance of workload between the human and swarm sides. Finally, metrics such as mission accomplishment rate, human supervisor performance, and mission performance of the UAV swarm are evaluated. The simulation results show that the algorithm can learn the human-machine trust interaction to balance the workload and achieve better mission execution performance. This work inspires us to leverage a more comprehensive HST model in more practical HMT application scenarios.
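
The paper couples workload, situational-awareness, and trust models that are not reproduced in the abstract. Purely as a toy illustration of using reinforcement learning to balance workload between human and swarm, here is a tabular Q-learning sketch with invented states, actions, dynamics, and reward; none of it is the paper's formulation.

```python
import random

# Toy Q-learning sketch: the agent picks how to shift the surveillance
# workload toward the swarm, based on a discretized human-workload state.
states = range(5)           # 0 = human idle ... 4 = human overloaded (assumed)
actions = [-0.1, 0.0, 0.1]  # decrease / keep / increase the swarm's share
Q = {(s, a): 0.0 for s in states for a in actions}

def reward(state, swarm_share):
    # Invented reward: penalize human overload/underload and extreme hand-off.
    return -abs(state - 2) - 0.5 * abs(swarm_share - 0.5)

swarm_share, alpha, gamma, eps = 0.5, 0.1, 0.9, 0.2
state = 2
for step in range(1000):
    a = random.choice(actions) if random.random() < eps else \
        max(actions, key=lambda x: Q[(state, x)])
    swarm_share = min(1.0, max(0.0, swarm_share + a))
    # Assumed dynamics: giving the swarm more work relieves the human operator.
    next_state = max(0, min(4, state + (1 if a < 0 else -1 if a > 0 else 0)))
    r = reward(next_state, swarm_share)
    Q[(state, a)] += alpha * (r + gamma * max(Q[(next_state, b)] for b in actions)
                              - Q[(state, a)])
    state = next_state
```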
García, Kimberly, Zihlmann, Zaira, Mayer, Simon, Tamò-Larrieux, Aurelia, Hooss, Johannes.  2021.  Towards Privacy-Friendly Smart Products. 2021 18th International Conference on Privacy, Security and Trust (PST). :1–7.
Smart products, such as toy robots, must comply with multiple legal requirements of the countries they are sold and used in. Currently, compliance with the legal environment requires manually customizing products for different markets. In this paper, we explore a design approach for smart products that enforces compliance with aspects of the European Union’s data protection principles within a product’s firmware through a toy robot case study. To this end, we present an exchange between computer scientists and legal scholars that identified the relevant data flows, their processing needs, and the implementation decisions that could allow a device to operate while complying with the EU data protection law. By designing a data-minimizing toy robot, we show that the variety, amount, and quality of data that is exposed, processed, and stored outside a user’s premises can be considerably reduced while preserving the device’s functionality. In comparison with a robot designed using a traditional approach, in which 90% of the collected types of information are stored by the data controller or a remote service, our proposed design leads to the mandatory exposure of only 7 out of 15 collected types of information, all of which are legally required by the data controller to demonstrate consent. Moreover, our design is aligned with the Data Privacy Vocabulary, which enables the toy robot to cross geographic borders and seamlessly adjust its data processing activities to the local regulations.
Battistuzzi, Linda, Grassi, Lucrezia, Recchiuto, Carmine Tommaso, Sgorbissa, Antonio.  2021.  Towards Ethics Training in Disaster Robotics: Design and Usability Testing of a Text-Based Simulation. 2021 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR). :104–109.
Rescue robots are expected to soon become commonplace at disaster sites, where they are increasingly being deployed to provide rescuers with improved access and intervention capabilities while mitigating risks. The presence of robots in operation areas, however, is likely to add a layer of ethical complexity to situations that are already ethically challenging. In addition, limited guidance is available for ethically informed, practical decision-making in real-life disaster settings, and specific ethics training programs are lacking. The contribution of this paper is thus to propose a tool aimed at supporting ethics training for rescuers operating with rescue robots. To this end, we have designed an interactive text-based simulation. The simulation was developed in Python using Tkinter, Python's de facto standard GUI library. It is designed in accordance with the Case-Based Learning approach, a widely used instructional method that has been found to work well for ethics training. The simulation revolves around a case grounded in ethical themes we identified in previous work on ethical issues in rescue robotics: fairness and discrimination, false or excessive expectations, labor replacement, safety, and trust. Here we present the design of the simulation and the results of usability testing.
Maksuti, Silia, Pickem, Michael, Zsilak, Mario, Stummer, Anna, Tauber, Markus, Wieschhoff, Marcus, Pirker, Dominic, Schmittner, Christoph, Delsing, Jerker.  2021.  Establishing a Chain of Trust in a Sporadically Connected Cyber-Physical System. 2021 IFIP/IEEE International Symposium on Integrated Network Management (IM). :890–895.
Drone-based applications have progressed significantly in recent years across many industries, including agriculture. This paper proposes a sporadically connected cyber-physical system for assisting winemakers and minimizing the travel time to remote and poorly connected infrastructures. A set of representative diseases and conditions, which will be monitored by land-bound sensors in combination with multispectral images, is identified. To collect accurate data, trustworthy and secure communication between the drone, the sensors, and the base station must be established. We propose to use an Internet of Things framework for establishing a chain of trust by securely onboarding drones, sensors, and the base station, and by providing self-adaptation support for the use case. Furthermore, we perform a security analysis of the use case to identify potential threats and the security controls that should be in place to mitigate them.
Pang, Yijiang, Liu, Rui.  2021.  Trust-Aware Emergency Response for A Resilient Human-Swarm Cooperative System. 2021 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR). :15–20.

A human-swarm cooperative system, which combines multiple robots and a human supervisor into a mission team, has been widely used in emergency scenarios such as criminal tracking and victim assistance. These scenarios involve human safety and require a robot team to quickly transition from its current task to a new emergent task. This sudden mission change complicates robot motion adjustment and increases the risk of performance degradation of the swarm. Trust in human-human collaboration reflects a general expectation of the collaboration; based on that trust, humans mutually adjust their behaviors for better teamwork. Inspired by this, in this research a trust-aware reflective control method (Trust-R) was developed for a robot swarm to understand the collaborative mission and calibrate its motions accordingly for better emergency response. Typical emergent tasks in social security, "transit between area inspection tasks" and "respond to an emergent target (car accident)", with eight fault-related situations were designed to simulate robot deployments. A human user study with 50 volunteers was conducted to model trust and assess swarm performance. Trust-R's effectiveness in supporting a robot team for emergency response was validated by improved task performance and increased trust scores.

Doroftei, Daniela, De Vleeschauwer, Tom, Bue, Salvatore Lo, Dewyn, Michaël, Vanderstraeten, Frik, De Cubber, Geert.  2021.  Human-Agent Trust Evaluation in a Digital Twin Context. 2021 30th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :203–207.
Autonomous systems have the potential to accomplish missions more quickly and effectively, while reducing risks to human operators and costs. However, since the use of autonomous systems is still relatively new, many challenges associated with trusting these systems remain. Without operators in direct control of all actions, there are significant concerns about endangering human lives or damaging equipment. For this reason, NATO has issued a challenge seeking to identify ways to improve decision-maker and operator trust when deploying autonomous systems and to de-risk their adoption. This paper presents the winning solution to this NATO challenge. It approaches trust as a multi-dimensional concept by incorporating the four dimensions of human-agent trust establishment in a digital twin context.
Arafin, Md Tanvir, Kornegay, Kevin.  2021.  Attack Detection and Countermeasures for Autonomous Navigation. 2021 55th Annual Conference on Information Sciences and Systems (CISS). :1–6.
Advances in artificial intelligence, machine learning, and robotics have profoundly impacted the field of autonomous navigation and driving. However, sensor spoofing attacks can compromise critical components and the control mechanisms of mobile robots. Therefore, understanding vulnerabilities in autonomous driving and developing countermeasures remains imperative for the safety of unmanned vehicles. Hence, in this work we demonstrate cross-validation techniques for detecting spoofing attacks on sensor data in autonomous driving. First, we discuss how visual and inertial odometry (VIO) algorithms can provide a root-of-trust during navigation. Then, we develop examples of sensor data spoofing attacks using an open-source driving dataset. Next, we design an attack detection technique using VIO algorithms that cross-validates the navigation parameters using the IMU and the visual data. We then consider hardware-dependent attack-survival mechanisms that support an autonomous system during an attack. Finally, we provide an example of a spoofing-survival technique using on-board hardware oscillators. Our work demonstrates the applicability of classical mobile robotics algorithms and hardware security primitives in defending autonomous vehicles from targeted cyber attacks.
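
The paper's detector cross-validates navigation parameters derived from IMU and visual data. The snippet below is a simplified sketch of that cross-validation idea, flagging time steps where an external position source (e.g., GPS) disagrees with the VIO root-of-trust by more than a threshold; the threshold and data are invented.

```python
import numpy as np

def spoofing_residual(vio_positions, ext_positions):
    """Per-step disagreement between VIO estimates and an external source."""
    return np.linalg.norm(np.asarray(vio_positions) - np.asarray(ext_positions),
                          axis=1)

def detect_spoofing(vio_positions, ext_positions, threshold=2.0):
    """Flag time steps where the external source deviates from the VIO
    root-of-trust by more than the (invented) threshold in meters."""
    return spoofing_residual(vio_positions, ext_positions) > threshold

# Example: the external position agrees for 3 steps, then jumps (spoofed).
vio = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 0, 0)]
ext = [(0, 0, 0), (1.1, 0, 0), (2.0, 0.1, 0), (9, 5, 0)]
print(detect_spoofing(vio, ext))  # [False False False  True]
```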
Lee, Hyo-Cheol, Lee, Seok-Won.  2021.  Towards Provenance-based Trust-aware Model for Socio-Technically Connected Self-Adaptive System. 2021 IEEE 45th Annual Computers, Software, and Applications Conference (COMPSAC). :761–767.
In a socio-technically connected environment, self-adaptive systems need to cooperate with others to collect information and provide context-dependent functionalities to users. A key component of ensuring safe and secure cooperation is finding trustworthy information and trustworthy providers. Trust is an emerging quality attribute that represents the level of belief in cooperative environments and serves as a promising solution in this regard. In this research, we focus on analyzing trust characteristics and defining trust-aware models through a trust-aware goal model and a provenance model. The trust-aware goal model is designed to represent trust-related requirements and their relationships. The provenance model is analyzed as trust evidence to be used in trust evaluation. The proposed approach contributes to building a comprehensive understanding of trust and designing a trust-aware self-adaptive system. To show the feasibility of the proposed approach, we conduct a case study with a crowd navigation system for an unmanned vehicle system.
Esterwood, Connor, Robert, Lionel P..  2021.  Do You Still Trust Me? Human-Robot Trust Repair Strategies. 2021 30th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :183–188.
Trust is vital to promoting human and robot collaboration, but like human teammates, robots make mistakes that undermine trust. As a result, a human's perception of his or her robot teammate's trustworthiness can dramatically decrease [1], [2], [3], [4]. Trustworthiness consists of three distinct dimensions: ability (i.e., competency), benevolence (i.e., concern for the trustor), and integrity (i.e., honesty) [5], [6]. Taken together, decreases in trustworthiness decrease trust in the robot [7]. To address this, we conducted a 2 (high vs. low anthropomorphism) x 4 (trust repair strategies) between-subjects experiment. Preliminary results from the first 164 participants (between 19 and 24 per cell) highlight which repair strategies are effective relative to ability, integrity, and benevolence and to the robot's anthropomorphism. Overall, this paper contributes to the HRI trust repair literature.
2021-02-03
Rabby, M. K. Monir, Khan, M. Altaf, Karimoddini, A., Jiang, S. X..  2020.  Modeling of Trust Within a Human-Robot Collaboration Framework. 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :4267–4272.

In this paper, a time-driven, performance-aware mathematical model of trust in the robot is proposed for a Human-Robot Collaboration (HRC) framework. The proposed trust model is based on both the human operator's and the robot's performance. The human operator's performance is modeled based on both physical and cognitive performance, while the robot's performance is modeled over its unpredictable, predictable, dependable, and faithful operation regions. The model is validated in different simulation scenarios. The simulation results show that trust in the robot in the HRC framework is governed by both robot performance and human operator performance and can be improved by enhancing the robot's performance.
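The abstract does not give the model's equations, so the following is only a generic sketch of what a time-driven, performance-dependent trust update could look like, with trust driven up by robot and human performance and bounded to [0, 1]; all coefficients are invented and are not the paper's.

```python
def trust_update(trust, robot_perf, human_perf, dt=0.1,
                 a=0.4, b=0.4, decay=0.05):
    """One plausible discrete-time form: trust rises with robot and human
    performance (both in [0, 1]) and slowly decays; coefficients invented."""
    d_trust = a * robot_perf + b * human_perf - decay * trust
    return min(1.0, max(0.0, trust + dt * d_trust))

# Example trajectory: a dip in robot performance slows the growth of trust.
trust = 0.5
for robot_perf, human_perf in [(0.9, 0.8), (0.2, 0.8), (0.9, 0.9)]:
    trust = trust_update(trust, robot_perf, human_perf)
    print(round(trust, 3))
```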

Bellas, A., Perrin, S., Malone, B., Rogers, K., Lucas, G., Phillips, E., Tossell, C., Visser, E. d.  2020.  Rapport Building with Social Robots as a Method for Improving Mission Debriefing in Human-Robot Teams. 2020 Systems and Information Engineering Design Symposium (SIEDS). :160–163.

Conflicts may arise at any time during military debriefing meetings, especially in high-intensity deployed settings. When such conflicts arise, it takes time to get everyone back into a receptive state of mind so that they engage in reflective discussion rather than unproductive arguing. Some have proposed that social robots equipped with social abilities such as emotion regulation through rapport building may help de-escalate these situations and facilitate critical operational decisions. However, in military settings, the AI agent used in the pre-brief of a mission may not be the same one used in the debrief. The purpose of this study was to determine whether a brief rapport-building session with a social robot could create a connection between a human and a robot agent, and whether consistency in the embodiment of the robot agent was necessary for maintaining this connection once formed. We report the results of a pilot study conducted at the United States Air Force Academy that simulated a military mission (i.e., Gravity and Strike). Measures of participants' connection with the agent, sense of trust, and overall likeability revealed that early rapport building can be beneficial for military missions.

Xu, J., Howard, A..  2020.  How much do you Trust your Self-Driving Car? Exploring Human-Robot Trust in High-Risk Scenarios. 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :4273–4280.

Trust is an important characteristic of successful interactions between humans and agents in many scenarios. Self-driving scenarios are of particular relevance when discussing trust due to the high-risk nature of erroneous decisions. The present study aims to investigate decision-making and aspects of trust in a realistic driving scenario in which an autonomous agent provides guidance to humans. To this end, a simulated driving environment based on a college campus was developed. An online and an in-person experiment were conducted to examine the impact of mistakes made by the self-driving AI agent on participants' decisions and trust. During the experiments, participants were asked to complete a series of driving tasks and make a sequence of decisions in a time-limited situation. Behavior analysis indicated a similar relative trend in decisions across the two experiments. Survey results revealed that a mistake made by the self-driving AI agent at the beginning had a significant impact on participants' trust. In addition, participants reported similar overall experiences and feelings across the two experimental conditions. The findings add to our understanding of trust in human-robot interaction scenarios and provide valuable insights for future research in the field of human-robot trust.

Lyons, J. B., Nam, C. S., Jessup, S. A., Vo, T. Q., Wynne, K. T..  2020.  The Role of Individual Differences as Predictors of Trust in Autonomous Security Robots. 2020 IEEE International Conference on Human-Machine Systems (ICHMS). :1–5.

This research used an Autonomous Security Robot (ASR) scenario to examine public reactions to a robot that possesses the authority and capability to inflict harm on a human. Individual differences in personality and Perfect Automation Schema (PAS) were examined as predictors of trust in the ASR. Participants (N=316) from Amazon Mechanical Turk (MTurk) rated their trust in the ASR and their desire to use ASRs in public and military contexts after watching a 2-minute video depicting the robot interacting with three research confederates. The video showed the robot using force against one of the three confederates with a non-lethal device. Results demonstrated that individual-difference factors were related to trust and desired use of the ASR. Agreeableness and both facets of the PAS (high expectations and all-or-none beliefs) demonstrated unique associations with trust in multiple regression analyses. Agreeableness, intellect, and high expectations were uniquely related to desired use in both public and military domains. This study showed that individual differences influence trust and desired use of ASRs, demonstrating that societal reactions to ASRs may vary among individuals.

Xu, J., Howard, A..  2020.  Would you Take Advice from a Robot? Developing a Framework for Inferring Human-Robot Trust in Time-Sensitive Scenarios. 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :814–820.

Trust is a key element of successful human-robot interaction. One challenging problem in this domain is how to construct a formulation that optimally models this trust phenomenon. This paper presents a framework for modeling human-robot trust that represents the human decision-making process as a formulation based on trust states. Using this formulation, we then discuss a generalized model of human-robot trust based on Hidden Markov Models and logistic regression. The proposed approach is validated on datasets collected from two different human-subject studies in which the human is given the ability to take advice from a robot. Both experimental scenarios were time-sensitive, in that a decision had to be made by the human in a limited time period, but each scenario featured a different level of cognitive load. The experimental results demonstrate that the proposed formulation can be utilized to model trust, with the system predicting whether the human will decide to take advice (or not) from the robot. We found that prediction performance degraded after the robot made a mistake. The validation of this approach on two scenarios implies that the model can be applied to other interactive scenarios as long as the interaction dynamics fit the proposed formulation. Directions for future improvements are discussed.
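
As an illustration of the two ingredients the framework combines, the sketch below runs a Hidden Markov Model forward update over hypothetical trust states and feeds the resulting belief into a logistic readout that predicts whether the human takes the robot's advice. All probabilities and weights are invented; the paper learns its parameters from the user-study data.

```python
import numpy as np

# Hidden trust states: 0 = trusting, 1 = distrusting (assumed two-state HMM).
T = np.array([[0.85, 0.15],   # trust-state transitions (illustrative)
              [0.25, 0.75]])
E = np.array([[0.8, 0.2],     # P(observed behavior | trust state)
              [0.3, 0.7]])    # behaviors: 0 = followed advice, 1 = ignored

def forward_step(belief, behavior):
    """One HMM forward-algorithm step: propagate, weight by emission, renorm."""
    b = (belief @ T) * E[:, behavior]
    return b / b.sum()

def p_take_advice(belief, w=4.0, bias=-2.0):
    """Logistic readout on the belief of being in the trusting state."""
    z = w * belief[0] + bias
    return 1.0 / (1.0 + np.exp(-z))

belief = np.array([0.5, 0.5])
for behavior in [0, 0, 1]:    # the human ignores the advice on the third step
    belief = forward_step(belief, behavior)
print(f"P(takes next advice) = {p_take_advice(belief):.2f}")
```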

Alarcon, G. M., Gibson, A. M., Jessup, S. A..  2020.  Trust Repair in Performance, Process, and Purpose Factors of Human-Robot Trust. 2020 IEEE International Conference on Human-Machine Systems (ICHMS). :1–6.

The current study explored the influence of trust and distrust behaviors on perceptions of performance, process, and purpose (trustworthiness) over time when participants were paired with a robot partner. We examined the changes in trustworthiness perceptions after trust violations and trust repair after those violations. Results indicated that performance, process, and purpose perceptions were all affected by trust violations, but perceptions of process and purpose decreased more than performance following a distrust behavior. Similarly, trust repair was achieved in performance perceptions, but trust repair in perceived process and purpose was absent. When a trust violation occurred, process and purpose perceptions deteriorated and failed to recover, and the violation resulted in perceptions of the robot as untrustworthy. In contrast, trust violations decreased partner performance perceptions, and subsequent trust behaviors resulted in trust repair. These findings suggest that people are more sensitive to distrust behaviors in their perceptions of process and purpose than in their perceptions of performance.