Biblio

Filters: Keyword is human-robot interaction

A
Lee, K., Reardon, C., Fink, J.  2018.  Augmented Reality in Human-Robot Cooperative Search. 2018 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR). :1–1.

Robots operating alongside humans in field environments have the potential to greatly increase the situational awareness of their human teammates. A significant challenge, however, is the efficient conveyance of what the robot perceives to the human in order to achieve improved situational awareness. We believe augmented reality (AR), which allows a human to simultaneously perceive the real world and digital information situated virtually in the real world, has the potential to address this issue. We propose to demonstrate that augmented reality can be used to enable human-robot cooperative search, where the robot can both share search results and assist the human teammate in navigating to a search target.

C
Poulsen, A., Burmeister, O. K., Tien, D.  2018.  Care Robot Transparency Isn't Enough for Trust. 2018 IEEE Region Ten Symposium (Tensymp). :293–297.

A recent study featuring a new kind of care robot indicated that participants expect a robot's ethical decision-making to be transparent in order to develop trust, even though the same type of `inspection of thoughts' isn't expected of a human carer. At first glance, this might suggest that robot transparency mechanisms are required for users to develop trust in robot-made ethical decisions. But the participants were found to desire transparency only when they didn't know the specifics of a human-robot social interaction. Humans trust others without observing their thoughts, which implies other means of determining trustworthiness. The study reported here suggests that the method is social interaction and observation, signifying that trust is a social construct, and that these `social determinants of trust' are the transparent elements. This socially determined behaviour draws on notions of virtue ethics. If a caregiver (nurse or robot) consistently provides good, ethical care, then patients can trust that caregiver to continue doing so. The same social determinants may apply to care robots, and thus it ought to be possible to trust them without the ability to see their thoughts. This study suggests why transparency mechanisms may not be effective in helping to develop trust in care robot ethical decision-making, and that roboticists need to build sociable elements into care robots to help patients develop trust in the care robot's ethical decision-making.

He, Hongmei, Gray, John, Cangelosi, Angelo, Meng, Qinggang, McGinnity, T. M., Mehnen, Jörn.  2020.  The Challenges and Opportunities of Artificial Intelligence for Trustworthy Robots and Autonomous Systems. 2020 3rd International Conference on Intelligent Robotic and Control Engineering (IRCE). :68–74.
Trust is essential in designing autonomous and semi-autonomous Robots and Autonomous Systems (RAS), because of the "No trust, no use" concept. RAS should provide high-quality services, with four key properties that make them trustworthy: they must be (i) robust with regard to any system health related issues, (ii) safe for any matters in their surrounding environments, (iii) secure against any threats from cyber space, and (iv) trusted for human-machine interaction. This article thoroughly analyses the challenges in implementing trustworthy RAS with respect to these four properties, and addresses the power of AI in improving the trustworthiness of RAS. While we focus on the benefits that AI brings to humans, we should realize the potential risks that could be caused by AI. This article introduces for the first time a set of key aspects of human-centered AI for RAS, which can serve as a cornerstone for implementing trustworthy RAS by design in the future.
Nielsen, C., Mathiesen, M., Nielsen, J., Jensen, L. C.  2019.  Changes in Heart Rate and Feeling of Safety When Led by a Rehabilitation Robot. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :580–581.

Trust is an important topic in medical human-robot interaction, since patients may be more fragile than other groups of people. This paper investigates the issue of users' trust when interacting with a rehabilitation robot. In the study, we investigate participants' heart rate and perception of safety in a scenario where their arm is led by the rehabilitation robot in two types of exercises at three different velocities. The participants' heart rates are measured during each exercise, and the participants are asked how safe they feel after each exercise. The results showed that velocity and type of exercise have no significant influence on the participants' heart rate, but they do have a significant influence on how safe they feel. We found that increasing velocity and longer exercises negatively influence participants' perception of safety.

Reardon, C., Lee, K., Fink, J.  2018.  Come See This! Augmented Reality to Enable Human-Robot Cooperative Search. 2018 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR). :1–7.

Robots operating alongside humans in field environments have the potential to greatly increase the situational awareness of their human teammates. A significant challenge, however, is the efficient conveyance of what the robot perceives to the human in order to achieve improved situational awareness. We believe augmented reality (AR), which allows a human to simultaneously perceive the real world and digital information situated virtually in the real world, has the potential to address this issue. Motivated by the emerging prevalence of practical human-wearable AR devices, we present a system that enables a robot to perform cooperative search with a human teammate, where the robot can both share search results and assist the human teammate in navigation to the search target. We demonstrate this ability in a search task in an uninstrumented environment where the robot identifies and localizes targets and provides navigation direction via AR to bring the human to the correct target.

Udeh, Chinonso Paschal, Chen, Luefeng, Du, Sheng, Li, Min, Wu, Min.  2022.  A Co-regularization Facial Emotion Recognition Based on Multi-Task Facial Action Unit Recognition. 2022 41st Chinese Control Conference (CCC). :6806–6810.
Facial emotion recognition (FER) feeds the growth of future artificial intelligence through the recognition, learning, and analysis of emotions from different angles of a human face and head pose. The recent pandemic gave rise to the rapid deployment of facial recognition in a few applications, while emotion recognition still remains within experimental boundaries. A current challenge for FER is discriminating expressions against background noise. Since robots will soon play significant roles in human perception, attention, memory, and decision-making, human-robot interaction (HRI) needs robust emotion understanding. The authors merge head pose with FER to boost robustness in understanding emotions using convolutional neural networks (CNNs). Stochastic gradient descent is adopted within a comprehensive multi-task learning model capable of implicit parallelism, serving as an inherently better global optimizer for finding network weights. After training the multi-task learning model on two independent datasets, the FER and head-pose multi-view co-regularization frameworks were merged and evaluated for validation accuracy.
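The co-regularized multi-task idea in this abstract can be illustrated with a short sketch: two CNN encoders (one per task view) are trained jointly with plain SGD, while a coupling term penalizes disagreement between the views' representations. This is a minimal sketch, not the authors' model; the layer sizes, the coupling weight lam, and the toy batch are assumptions for illustration.

```python
import torch
import torch.nn as nn

class TwoViewNet(nn.Module):
    """Two CNN encoders (FER view and head-pose view) over the same face image."""
    def __init__(self, n_emotions=7, n_poses=5, dim=64):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(32 * 16, dim),
            )
        self.fer_enc, self.pose_enc = encoder(), encoder()
        self.emotion_head = nn.Linear(dim, n_emotions)
        self.pose_head = nn.Linear(dim, n_poses)

    def forward(self, x):
        z_fer, z_pose = self.fer_enc(x), self.pose_enc(x)
        return self.emotion_head(z_fer), self.pose_head(z_pose), z_fer, z_pose

model = TwoViewNet()
opt = torch.optim.SGD(model.parameters(), lr=0.01)  # plain SGD, per the abstract
ce, lam = nn.CrossEntropyLoss(), 0.1

# Toy batch standing in for the two independent datasets.
images = torch.randn(8, 1, 32, 32)
y_emotion = torch.randint(0, 7, (8,))
y_pose = torch.randint(0, 5, (8,))

emo_logits, pose_logits, z_fer, z_pose = model(images)
# Joint multi-task loss with a co-regularizer that penalizes disagreement
# between the two views' learned representations.
loss = (ce(emo_logits, y_emotion) + ce(pose_logits, y_pose)
        + lam * (z_fer - z_pose).pow(2).mean())
opt.zero_grad()
loss.backward()
opt.step()
```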
D
Herse, S., Vitale, J., Tonkin, M., Ebrahimian, D., Ojha, S., Johnston, B., Judge, W., Williams, M.  2018.  Do You Trust Me, Blindly? Factors Influencing Trust Towards a Robot Recommender System. 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). :7–14.

When robots and human users collaborate, trust is essential for user acceptance and engagement. In this paper, we investigated two factors thought to influence user trust towards a robot: preference elicitation (a combination of user involvement and explanation) and embodiment. We set our experiment in the application domain of a restaurant recommender system, assessing trust via user decision making and perceived source credibility. Previous research in this area uses simulated environments and recommender systems that present the user with the best choice from a pool of options. This experiment builds on past work in two ways: first, we strengthened the ecological validity of our experimental paradigm by incorporating perceived risk during decision making; and second, we used a system that recommends a nonoptimal choice to the user. While no effect of embodiment is found for trust, the inclusion of preference elicitation features significantly increases user trust towards the robot recommender system. These findings have implications for marketing and health promotion in relation to Human-Robot Interaction and call for further investigation into the development and maintenance of trust between robot and user.

E
Haider, C., Chebotarev, Y., Tsiourti, C., Vincze, M.  2019.  Effects of Task-Dependent Robot Errors on Trust in Human-Robot Interaction: A Pilot Study. 2019 IEEE SmartWorld, Ubiquitous Intelligence Computing, Advanced Trusted Computing, Scalable Computing Communications, Cloud Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI). :172–177.

The growing diffusion of robotics into our daily life demands a deeper understanding of the mechanisms of trust in human-robot interaction. The performance of a robot is one of the most important factors influencing the trust of a human user. However, it is still unclear whether the circumstances in which a robot fails affect the user's trust. We investigate how the perception of robot failures may influence the willingness of people to cooperate with the robot by following its instructions in a time-critical task. We conducted an experiment in which participants interacted with a robot that had previously failed in a related or an unrelated task. We hypothesized that users' observed and self-reported trust ratings would be higher in the condition where the robot had previously failed in an unrelated task. A proof-of-concept study with nine participants tentatively confirms our hypothesis. At the same time, our results reveal some flaws in the experimental design and encourage a future large-scale study.

Moolchandani, Pooja, Hayes, Cory J., Marge, Matthew.  2018.  Evaluating Robot Behavior in Response to Natural Language. Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. :197–198.
Human-robot teaming can be improved if a robot's actions meet human users' expectations. The goal of this research is to determine what variations of robot actions in response to natural language match human judges' expectations in a series of tasks. We conducted a study with 21 volunteers that analyzed how a virtual robot behaved when executing eight navigation instructions from a corpus of human-robot dialogue. Initial findings suggest that movement more accurately meets human expectation when the robot (1) navigates with an awareness of its environment and (2) demonstrates a sense of self-safety.
Aylett, Ruth, Broz, Frank, Ghosh, Ayan, McKenna, Peter, Rajendran, Gnanathusharan, Foster, Mary Ellen, Roffo, Giorgio, Vinciarelli, Alessandro.  2017.  Evaluating Robot Facial Expressions. Proceedings of the 19th ACM International Conference on Multimodal Interaction. :516–517.

This paper outlines a demonstration of the work carried out in the SoCoRo project investigating how far a neuro-typical population recognises facial expressions on a non-naturalistic robot face that are designed to show approval and disapproval. RFID-tagged objects are presented to an Emys robot head (called Alyx) and Alyx reacts to each with a facial expression. Participants are asked to put the object in a box marked 'Like' or 'Dislike'. This study is being extended to include assessment of participants' Autism Quotient using a validated questionnaire as a step towards using a robot to help train high-functioning adults with an Autism Spectrum Disorder in social signal recognition.

Rossi, Alessandra, Andriella, Antonio, Rossi, Silvia, Torras, Carme, Alenyà, Guillem.  2022.  Evaluating the Effect of Theory of Mind on People’s Trust in a Faulty Robot. 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :477–482.
The success of human-robot interaction is strongly affected by people's ability to infer others' intentions and behaviours, and by the level of people's trust that others will abide by the same principles and social conventions to achieve a common goal. The ability to understand and reason about other agents' mental states is known as Theory of Mind (ToM). ToM and trust, therefore, are key factors in the positive outcome of human-robot interaction. We believe that a robot endowed with a ToM is able to gain people's trust, even when it occasionally makes errors. In this work, we present a user study in the field in which participants (N=123) interacted with a robot that may or may not have a ToM, and may or may not exhibit erroneous behaviour. Our findings indicate that a robot with ToM is perceived as more reliable, and participants trusted it more than a robot without a ToM, even when the robot made errors. Finally, ToM proves to be a key driver for tuning people's trust in the robot even when the initial conditions of the interaction changed (i.e., loss and regain of trust in a longer relationship).
Robinette, P., Novitzky, M., Fitzgerald, C., Benjamin, M. R., Schmidt, H.  2019.  Exploring Human-Robot Trust During Teaming in a Real-World Testbed. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :592–593.

Project Aquaticus is a human-robot teaming competition on the water involving autonomous surface vehicles and human operated motorized kayaks. Teams composed of both humans and robots share the same physical environment to play capture the flag. In this paper, we present results from seven competitions of our half-court (one participant versus one robot) game. We found that participants indicated more trust in more aggressive behaviors from robots.

F
Gao, Y., Sibirtseva, E., Castellano, G., Kragic, D.  2019.  Fast Adaptation with Meta-Reinforcement Learning for Trust Modelling in Human-Robot Interaction. 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). :305–312.

In socially assistive robotics, an important research area is the development of adaptation techniques and their effect on human-robot interaction. We present a meta-learning based policy gradient method for addressing the problem of adaptation in human-robot interaction and also investigate its role as a mechanism for trust modelling. By building an escape room scenario in mixed reality with a robot, we test our hypothesis that bi-directional trust can be influenced by different adaptation algorithms. We found that our proposed model increased the perceived trustworthiness of the robot and influenced the dynamics of gaining the human's trust. Additionally, participants reported that the robot perceived them as more trustworthy during the interactions with the meta-learning based adaptation than with the previously studied statistical adaptation model.
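For readers unfamiliar with meta-learning based policy gradients, the sketch below illustrates the general fast-adaptation recipe (a first-order MAML-style outer loop around a REINFORCE inner step) on a toy two-armed bandit. It is only a hedged illustration of the technique named in the abstract, not the authors' method; the bandit tasks, step sizes, and one-step adaptation are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(theta):
    e = np.exp(theta - theta.max())
    return e / e.sum()

def reinforce_grad(theta, p_reward, n=50):
    """Monte-Carlo REINFORCE gradient for a softmax policy on a 2-armed bandit."""
    probs = softmax(theta)
    grad = np.zeros_like(theta)
    for _ in range(n):
        a = rng.choice(2, p=probs)
        r = float(rng.random() < p_reward[a])  # Bernoulli reward
        g = -probs.copy()                      # grad log pi(a) = e_a - probs
        g[a] += 1.0
        grad += r * g
    return grad / n

theta = np.zeros(2)      # meta-parameters: the shared policy initialisation
alpha, beta = 0.5, 0.1   # inner (adaptation) and outer (meta) step sizes

for _ in range(200):                   # outer loop over sampled tasks ("users")
    p = rng.random(2)                  # each task has its own reward probabilities
    adapted = theta + alpha * reinforce_grad(theta, p)  # one-step fast adaptation
    # First-order meta-update: pull the initialisation toward parameters that
    # perform well *after* adaptation (a first-order MAML approximation).
    theta += beta * reinforce_grad(adapted, p)

print("meta-learned initial policy:", softmax(theta))
```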

H
Esterwood, Connor, Robert, Lionel P.  2022.  Having the Right Attitude: How Attitude Impacts Trust Repair in Human-Robot Interaction. 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :332–341.
Robot co-workers, like human co-workers, make mistakes that undermine trust. Yet, trust is just as important in promoting human-robot collaboration as it is in promoting human-human collaboration. In addition, individuals can significantly differ in their attitudes toward robots, which can also impact or hinder their trust in robots. To better understand how individual attitude can influence trust repair strategies, we propose a theoretical model that draws from the theory of cognitive dissonance. To empirically verify this model, we conducted a between-subjects experiment with 100 participants assigned to one of four repair strategies (apologies, denials, explanations, or promises) over three trust violations. Individual attitudes did moderate the efficacy of repair strategies, and this effect differed over successive trust violations. Specifically, repair strategies were most effective relative to individual attitude during the second of the three trust violations, and promises were the trust repair strategy most impacted by an individual's attitude.
Razin, Y. S., Feigh, K. M.  2020.  Hitting the Road: Exploring Human-Robot Trust for Self-Driving Vehicles. 2020 IEEE International Conference on Human-Machine Systems (ICHMS). :1–6.

With self-driving cars making their way on to our roads, we ask not what it would take for them to gain acceptance among consumers, but what impact they may have on other drivers. How they will be perceived and whether they will be trusted will likely have a major effect on traffic flow and vehicular safety. This work first undertakes an exploratory factor analysis to validate a trust scale for human-robot interaction and shows how previously validated metrics and general trust theory support a more complete model of trust that has increased applicability in the driving domain. We experimentally test this expanded model in the context of human-automation interaction during simulated driving, revealing how using these dimensions uncovers significant biases within human-robot trust that may have particularly deleterious effects when it comes to sharing our future roads with automated vehicles.

Ogawa, R., Park, S., Umemuro, H.  2019.  How Humans Develop Trust in Communication Robots: A Phased Model Based on Interpersonal Trust. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :606–607.

The purpose of this study was to propose a model of the development of trust in social robots. Insights into interpersonal trust were adopted from social psychology and a novel model was proposed. In addition, this study aimed to investigate the relationship between trust development and self-esteem. To validate the proposed model, an experiment using the communication robot NAO was conducted, and changes in categories of trust as well as self-esteem were measured. Results showed that general and category trust were developed in the early phase. Self-esteem also increased over the course of the interactions with the robot.

Xu, J., Howard, A.  2020.  How much do you Trust your Self-Driving Car? Exploring Human-Robot Trust in High-Risk Scenarios. 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :4273–4280.

Trust is an important characteristic of successful interactions between humans and agents in many scenarios. Self-driving scenarios are of particular relevance when discussing the issue of trust due to the high-risk nature of erroneous decisions being made. The present study aims to investigate decision-making and aspects of trust in a realistic driving scenario in which an autonomous agent provides guidance to humans. To this end, a simulated driving environment based on a college campus was developed and presented. An online and an in-person experiment were conducted to examine the impacts of mistakes made by the self-driving AI agent on participants’ decisions and trust. During the experiments, participants were asked to complete a series of driving tasks and make a sequence of decisions in a time-limited situation. Behavior analysis indicated a similar relative trend in the decisions across these two experiments. Survey results revealed that a mistake made by the self-driving AI agent at the beginning had a significant impact on participants’ trust. In addition, similar overall experience and feelings across the two experimental conditions were reported. The findings in this study add to our understanding of trust in human-robot interaction scenarios and provide valuable insights for future research work in the field of human-robot trust.

Rossi, A., Dautenhahn, K., Koay, K. Lee, Walters, M. L.  2020.  How Social Robots Influence People’s Trust in Critical Situations. 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :1020–1025.

As we expect that the presence of autonomous robots in our everyday life will increase, we must consider that people will not only have to accept robots as a fundamental part of their lives, but will also have to trust them to reliably and securely engage in collaborative tasks. Several studies showed that people are more comfortable interacting with robots that respect social conventions. However, it is still not clear whether a robot that expresses social conventions will more readily gain people's trust. In this study, we aimed to assess whether the use of social behaviours and natural communications can affect humans' sense of trust and companionship towards robots. We conducted a between-subjects study where participants' trust was tested in three scenarios with increasing trust criticality (low, medium, high) in which they interacted either with a social or a non-social robot. Our findings showed that participants trusted a social and a non-social robot equally in the low- and medium-consequence scenarios. On the contrary, we observed that participants' choice to trust the robot in the more sensitive task was affected more by a robot that expressed social cues, with a consequent decrease in their trust in the robot.

Gutzwiller, R. S., Reeder, J.  2017.  Human interactive machine learning for trust in teams of autonomous robots. 2017 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA). :1–3.

Unmanned systems are increasing in number, while their manning requirements remain the same. To decrease manpower demands, machine learning techniques and autonomy are gaining traction and visibility. One barrier is human perception and understanding of autonomy. Machine learning techniques can result in “black box” algorithms that may yield high fitness, but poor comprehension by operators. However, Interactive Machine Learning (IML), a method to incorporate human input over the course of algorithm development by using neuro-evolutionary machine-learning techniques, may offer a solution. IML is evaluated here for its impact on developing autonomous team behaviors in an area search task. Initial findings show that IML-generated search plans were chosen over plans generated using a non-interactive ML technique, even though the participants trusted them slightly less. Further, participants discriminated each of the two types of plans from each other with a high degree of accuracy, suggesting the IML approach imparts behavioral characteristics into algorithms, making them more recognizable. Together the results lay the foundation for exploring how to team humans successfully with ML behavior.

Byrne, K., Marín, C.  2018.  Human Trust in Robots When Performing a Service. 2018 IEEE 27th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE). :9–14.

The presence of robots is becoming more apparent as technology progresses and the market focus transitions from smart phones to robotic personal assistants such as those provided by Amazon and Google. The integration of robots in our societies is an inevitable tendency in which robots in many forms and with many functionalities will provide services to humans. This calls for an understanding of how humans are affected by both the presence of and the reliance on robots to perform services for them. In this paper we explore the effects that robots have on humans when a service is performed on request. We expose three groups of human participants to three levels of service completion performed by robots. We record and analyse human perceptions such as propensity to trust, competency, responsiveness, sociability, and team work ability. Our results demonstrate that humans tend to trust robots and are more willing to interact with them when they autonomously recover from failure by requesting help from other robots to fulfil their service. This supports the view that autonomy and team working capabilities must be brought into robots in an effort to strengthen trust in robots performing a service.

I
Sebo, S. S., Krishnamurthi, P., Scassellati, B.  2019.  “I Don't Believe You”: Investigating the Effects of Robot Trust Violation and Repair. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :57–65.

When a robot breaks a person's trust by making a mistake or failing, continued interaction will depend heavily on how the robot repairs the trust that was broken. Prior work in psychology has demonstrated that both the trust violation framing and the trust repair strategy influence how effectively trust can be restored. We investigate trust repair between a human and a robot in the context of a competitive game, where a robot tries to restore a human's trust after a broken promise, using either a competence or integrity trust violation framing and either an apology or denial trust repair strategy. Results from a 2×2 between-subjects study (n=82) show that participants interacting with a robot employing the integrity trust violation framing and the denial trust repair strategy are significantly more likely to exhibit behavioral retaliation toward the robot. In the Dyadic Trust Scale survey, an interaction between trust violation framing and trust repair strategy was observed. Our results demonstrate the importance of considering both trust violation framing and trust repair strategy choice when designing robots to repair trust. We also discuss the influence of human-to-robot promises and ethical considerations when framing and repairing trust between a human and robot.

Xu, J., Howard, A.  2018.  The Impact of First Impressions on Human-Robot Trust During Problem-Solving Scenarios. 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). :435–441.

With recent advances in robotics, it is expected that robots will become increasingly common in human environments, such as homes and workplaces. Robots will assist and collaborate with humans on a variety of tasks. During these collaborations, it is inevitable that disagreements in decisions will occur between humans and robots. Among the factors that determine whose decision a human should ultimately follow, their own or the robot's, trust is a critical one to consider. This study aims to investigate individuals' behaviors and aspects of trust in a problem-solving situation in which a decision must be made in a bounded amount of time. A between-subjects experiment was conducted with 100 participants. With the assistance of a humanoid robot, participants were requested to tackle a cognitive-based task within a given time frame. Each participant was randomly assigned to one of the following initial conditions: 1) a working robot, in which the robot provided a correct answer, or 2) a faulty robot, in which the robot provided an incorrect answer. The impact of the faulty robot behavior on participants' decisions to follow the robot's suggested answer was analyzed. Survey responses about trust were collected after interacting with the robot. Results indicated that the first impression has a significant impact on participants' willingness to trust a robot's advice during a disagreement. In addition, this study found evidence that individuals still trust a malfunctioning robot even after they have observed its faulty behavior.

L
Ko, Wilson K.H., Wu, Yan, Tee, Keng Peng.  2016.  LAP: A Human-in-the-loop Adaptation Approach for Industrial Robots. Proceedings of the Fourth International Conference on Human Agent Interaction. :313–319.

In the last few years, a shift from mass production to mass customisation has been observed in industry. Easily reprogrammable robots that can perform a wide variety of tasks are desired to keep up with the trend of mass customisation while saving costs and development time. Learning by Demonstration (LfD) is an easy way to program robots in an intuitive manner and provides a solution to this problem. In this work, we discuss and evaluate LAP, a three-stage LfD method that conforms to the criteria for high-mix-low-volume (HMLV) industrial settings. The algorithm learns a trajectory in the task space, after which small segments can be adapted on-the-fly by using a human-in-the-loop approach. The human operator acts as a high-level adaptation, correction and evaluation mechanism to guide the robot. This way, no sensors or complex feedback algorithms are needed to improve robot behaviour, so errors and inaccuracies induced by these subsystems are avoided. Once the system performs at a satisfactory level after adaptation, the operator is removed from the loop. The robot then proceeds in a feed-forward fashion to optimise for speed. We demonstrate this method by simulating an industrial painting application. A KUKA LBR iiwa is taught how to draw a figure eight, which is reshaped by the operator during adaptation.

Kumar, Suren, Dhiman, Vikas, Koch, Parker A., Corso, Jason J.  2018.  Learning Compositional Sparse Bimodal Models. IEEE Transactions on Pattern Analysis and Machine Intelligence. 40:1032–1044.

Various perceptual domains have underlying compositional semantics that are rarely captured in current models. We suspect this is because directly learning the compositional structure has evaded these models. Yet, the compositional structure of a given domain can be grounded in a separate domain thereby simplifying its learning. To that end, we propose a new approach to modeling bimodal perceptual domains that explicitly relates distinct projections across each modality and then jointly learns a bimodal sparse representation. The resulting model enables compositionality across these distinct projections and hence can generalize to unobserved percepts spanned by this compositional basis. For example, our model can be trained on red triangles and blue squares; yet, implicitly will also have learned red squares and blue triangles. The structure of the projections and hence the compositional basis is learned automatically; no assumption is made on the ordering of the compositional elements in either modality. Although our modeling paradigm is general, we explicitly focus on a tabletop building-blocks setting. To test our model, we have acquired a new bimodal dataset comprising images and spoken utterances of colored shapes (blocks) in the tabletop setting. Our experiments demonstrate the benefits of explicitly leveraging compositionality in both quantitative and human evaluation studies.
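The core modelling idea, a single sparse code shared across two modalities so that compositional parts (e.g., colours and shapes) can emerge as dictionary atoms, can be sketched briefly. This is a hedged illustration, not the authors' model; the toy compositional data and the use of scikit-learn's DictionaryLearning are stand-ins for their formulation.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)

# Toy bimodal "percepts": each sample pairs a visual vector with a spoken-word
# vector, both generated compositionally from a colour part and a shape part.
vis_color, vis_shape = rng.normal(size=(3, 20)), rng.normal(size=(3, 20))
aud_color, aud_shape = rng.normal(size=(3, 15)), rng.normal(size=(3, 15))

X = np.array([
    np.concatenate([vis_color[c] + vis_shape[s],   # image modality
                    aud_color[c] + aud_shape[s]])  # speech modality
    for c in range(3) for s in range(3)            # all 9 colour-shape percepts
])

# Stacking the modalities forces one shared sparse code to explain both, so the
# learned atoms can align with the compositional parts (colours and shapes).
dico = DictionaryLearning(n_components=6, alpha=0.5, random_state=0,
                          transform_algorithm="lasso_lars", transform_alpha=0.5)
codes = dico.fit_transform(X)
print(codes.round(2))  # 9 samples x 6 atoms; sparse, part-like activations
```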

Ye, S., Feigh, K., Howard, A.  2020.  Learning in Motion: Dynamic Interactions for Increased Trust in Human-Robot Interaction Games. 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :1186–1189.

Embodiment of actions and tasks has typically been analyzed from the robot's perspective, where the robot's embodiment helps develop and maintain trust. However, we ask a similar question from the human perspective. Embodied cognition has been shown in the cognitive science literature to produce increased social empathy and cooperation. To understand how human embodiment can help develop and increase trust in human-robot interactions, we conducted a study where participants were tasked with memorizing Greek letters associated with dance motions with the help of a humanoid robot. Participants either performed the dance motion or utilized a touch screen during the interaction. The results showed that participants' trust in the robot increased at a higher rate during human embodiment of motions as opposed to utilizing a touch screen device.