Biblio

Found 222 results

Filters: Keyword is Trust
2021-02-01
Han, W., Schulz, H.-J..  2020.  Beyond Trust Building — Calibrating Trust in Visual Analytics. 2020 IEEE Workshop on TRust and EXpertise in Visual Analytics (TREX). :9–15.
Trust is a fundamental factor in how users engage in interactions with Visual Analytics (VA) systems. While the importance of building trust to this end has been pointed out in research, the aspect that trust can also be misplaced is largely ignored in VA so far. This position paper addresses this aspect by putting trust calibration in focus – i.e., the process of aligning the user’s trust with the actual trustworthiness of the VA system. To this end, we present the trust continuum in the context of VA, dissect important trust issues in both VA systems and users, as well as discuss possible approaches that can build and calibrate trust.
Ajenaghughrure, I. B., Sousa, S. C. da Costa, Lamas, D..  2020.  Risk and Trust in artificial intelligence technologies: A case study of Autonomous Vehicles. 2020 13th International Conference on Human System Interaction (HSI). :118–123.
This study investigates how risk influences users' trust before and after interactions with technologies such as autonomous vehicles (AVs), and examines the psychophysiological correlates of users' trust in their electrodermal activity responses. Eighteen (18) carefully selected participants embark on a hypothetical trip playing an autonomous vehicle driving game. To stay safe throughout the driving experience under four risk conditions (very high risk, high risk, low risk, and no risk), based on automotive safety integrity levels (ASIL D, C, B, A), participants exhibit either high or low trust by evaluating the AV to be more or less trustworthy and consequently relying on either the artificial intelligence or the joystick to control the vehicle. The results of the experiment show a significant increase in users' trust and in their delegation of control to the AV as risk decreases, and vice versa. In addition, there was a significant difference in users' initial trust before and after interacting with AVs under varying risk conditions. Finally, there was a significant correlation in users' psychophysiological responses (electrodermal activity) when exhibiting higher and lower trust levels towards AVs. The implications of these results and future research opportunities are discussed.
Papadopoulos, A. V., Esterle, L..  2020.  Situational Trust in Self-aware Collaborating Systems. 2020 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion (ACSOS-C). :91–94.
Trust among humans affects the way we interact with each other. In autonomous systems, this trust is often predefined and hard-coded before the systems are deployed. However, when systems encounter unfolding situations, requiring them to interact with others, a notion of trust will be inevitable. In this paper, we discuss trust as a fundamental measure to enable an autonomous system to decide whether or not to interact with another system, whether biological or artificial. These decisions become increasingly important when continuously integrating with others during runtime.
Hou, M..  2020.  IMPACT: A Trust Model for Human-Agent Teaming. 2020 IEEE International Conference on Human-Machine Systems (ICHMS). :1–4.
A trust model, IMPACT (Intention, Measurability, Predictability, Agility, Communication, and Transparency), has been conceptualized to build human trust in autonomous agents. These six critical characteristics must be exhibited by the agents in order to gain and maintain trust from their human partners towards an effective and collaborative team in achieving common goals. The IMPACT model guided the design of an intelligent adaptive decision aid for dynamic target engagement processes in a military context. Positive feedback from subject matter experts who participated in a large-scale joint exercise controlling multiple unmanned vehicles indicated the effectiveness of the decision aid. It also demonstrated the utility of the IMPACT model as a set of design principles for building trusted human-agent teaming.
2020-12-21
Jithish, J., Sankaran, S., Achuthan, K..  2020.  Towards Ensuring Trustworthiness in Cyber-Physical Systems: A Game-Theoretic Approach. 2020 International Conference on COMmunication Systems NETworkS (COMSNETS). :626–629.

The emergence of Cyber-Physical Systems (CPSs) is a potential paradigm shift for the usage of Information and Communication Technologies (ICT). From predominantly a facilitator of information and communication services, the role of ICT in the present age has expanded to the management of objects and resources in the physical world. Thus, it is imperative to devise mechanisms to ensure the trustworthiness of data to secure vulnerable devices against security threats. This work presents an analytical framework based on non-cooperative game theory to evaluate the trustworthiness of individual sensor nodes that constitute the CPS. The proposed game-theoretic model captures the factors impacting the trustworthiness of CPS sensor nodes. Further, the model is used to estimate the Nash equilibrium solution of the game, to derive a trust threshold criterion. The trust threshold represents the minimum trust score required to be maintained by individual sensor nodes during CPS operation. Sensor nodes with trust scores below the threshold are potentially malicious and may be removed or isolated to ensure the secure operation of CPS.
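
The abstract does not state the concrete threshold rule, so the following is only an illustrative sketch of the final filtering step it describes: each sensor node's trust score is compared against a minimum threshold assumed to be derived offline from the game's equilibrium analysis. All names, scores, and the threshold value are invented for illustration.

# Illustrative sketch only: flag CPS sensor nodes whose trust score falls
# below a threshold assumed to come from the Nash-equilibrium analysis.
# The threshold value and node scores here are made-up examples.

TRUST_THRESHOLD = 0.62  # assumed equilibrium-derived minimum trust score

node_trust_scores = {
    "sensor-01": 0.91,
    "sensor-02": 0.58,   # below threshold -> candidate for isolation
    "sensor-03": 0.74,
}

def flag_untrusted(scores, threshold=TRUST_THRESHOLD):
    """Return the nodes whose trust score is below the threshold."""
    return [node for node, score in scores.items() if score < threshold]

if __name__ == "__main__":
    suspicious = flag_untrusted(node_trust_scores)
    print("Nodes to isolate or inspect:", suspicious)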

2020-12-07
Islam, M. M., Karmakar, G., Kamruzzaman, J., Murshed, M..  2019.  Measuring Trustworthiness of IoT Image Sensor Data Using Other Sensors’ Complementary Multimodal Data. 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :775–780.
Trust of image sensor data is becoming increasingly important as Internet of Things (IoT) applications grow from home appliances to surveillance. To our knowledge, there exists only one work in the literature that estimates the trustworthiness of digital images applied to forensic applications, based on a machine learning technique. The efficacy of this technique is heavily dependent on the availability of an appropriate training set and adequate variation of IoT sensor data with noise, interference and environmental conditions, but the availability of such data cannot always be assured. Therefore, to overcome this limitation, a robust method capable of estimating a trustworthiness measure with high accuracy is needed. The lowering cost of sensors allows many IoT applications to use multiple types of sensors to observe the same event. In such cases, the complementary multimodal data of one sensor can be exploited to measure the trust level of another sensor's data. In this paper, for the first time, we introduce a completely new approach to estimate the trustworthiness of image sensor data using another sensor's numerical data. We develop a theoretical model using the Dempster-Shafer theory (DST) framework. The efficacy of the proposed model in estimating the trust level of image sensor data is analyzed by observing a fire event using IoT image and temperature sensor data in a residential setup under different scenarios. The proposed model produces highly accurate trust levels in all scenarios with authentic and forged image data.
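
The core operation in any DST-based fusion of two sensors is Dempster's rule of combination. The sketch below shows that rule on a two-hypothesis frame (image trustworthy vs. forged); the mass assignments are invented examples, not the paper's model or data.

# Illustrative sketch of Dempster's rule of combination for fusing evidence
# from an image sensor and a temperature sensor. Mass values are invented.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("Total conflict; evidence cannot be combined")
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

T, F = frozenset({"trustworthy"}), frozenset({"forged"})
EITHER = T | F  # ignorance: mass not committed to either hypothesis

m_image = {T: 0.6, F: 0.1, EITHER: 0.3}   # evidence from the image sensor
m_temp = {T: 0.7, F: 0.1, EITHER: 0.2}    # complementary temperature evidence

fused = combine(m_image, m_temp)
print({tuple(sorted(k)): round(v, 3) for k, v in fused.items()})
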
Challagidad, P. S., Birje, M. N..  2019.  Determination of Trustworthiness of Cloud Service Provider and Cloud Customer. 2019 5th International Conference on Advanced Computing Communication Systems (ICACCS). :839–843.
In a service-oriented computing environment (e.g. cloud computing), Cloud Customers (CCs) and Cloud Service Providers (CSPs) need to calculate the trust ranks of a prospective partner prior to engaging in interactions. Determining trustworthiness dynamically is a demanding problem in an open and dynamic environment (such as cloud computing) because many CSPs provide the same types of services. Presently, there are very few dynamic trust evaluation schemes that permit CCs to evaluate CSPs' trustworthiness from multi-dimensional perspectives. Similarly, there is no scheme that permits CSPs to evaluate the trustworthiness of CCs. This paper proposes a Multidimensional Dynamic Trust Evaluation Scheme (MDTES) that facilitates CCs to evaluate the trustworthiness of CSPs from various viewpoints. A similar approach can be employed by CSPs to evaluate the trustworthiness of CCs. The proposed MDTES helps CCs to choose a trustworthy CSP that meets their desired QoS requirements, and helps CSPs to choose desired and legal CCs. The simulation results illustrate that the MDTES is dynamic and stable in distinguishing trustworthy and untrustworthy CSPs and CCs.
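
The abstract does not give the MDTES formulation, so the sketch below only illustrates the general shape of a multi-dimensional trust evaluation: a weighted aggregation over several trust viewpoints. The dimensions, weights, and ratings are assumptions for illustration.

# Illustrative sketch of multi-dimensional trust aggregation (not the MDTES
# formula itself): a weighted average over hypothetical trust dimensions.

def aggregate_trust(ratings, weights):
    """Weighted average of per-dimension trust ratings in [0, 1]."""
    total_weight = sum(weights.values())
    return sum(ratings[d] * weights[d] for d in weights) / total_weight

csp_ratings = {          # hypothetical per-dimension scores for one CSP
    "availability": 0.95,
    "performance": 0.80,
    "security": 0.70,
    "feedback": 0.85,
}
dimension_weights = {"availability": 0.3, "performance": 0.2,
                     "security": 0.3, "feedback": 0.2}

print(round(aggregate_trust(csp_ratings, dimension_weights), 3))
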
2020-12-02
Wang, Q., Zhao, W., Yang, J., Wu, J., Hu, W., Xing, Q..  2019.  DeepTrust: A Deep User Model of Homophily Effect for Trust Prediction. 2019 IEEE International Conference on Data Mining (ICDM). :618—627.

Trust prediction in online social networks is crucial for information dissemination, product promotion, and decision making. Existing work on trust prediction mainly utilizes the network structure or a low-rank approximation of the trust network. These approaches can suffer from data sparsity and limited prediction accuracy. Inspired by the homophily theory, which identifies a pervasive feature of social and economic networks that trust relations tend to develop among similar people, we propose a novel deep user model for trust prediction based on user similarity measurement. It is a comprehensive, data-sparsity-insensitive model that combines a user's review behavior and the characteristics of the items that the user is interested in. With this user model, we first generate a user's latent features mined from their review behavior and the properties of the items they care about. Then we develop a pair-wise deep neural network to further learn and represent these user features. Finally, we measure the trust relation between a pair of people by calculating the cosine similarity of their user feature vectors. Extensive experiments are conducted on two real-world datasets, which demonstrate the superior performance of the proposed approach over representative baseline works.
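
The final scoring step named in the abstract is cosine similarity between two users' learned feature vectors. The sketch below shows only that step; the random vectors stand in for features that would come from the paper's pair-wise deep neural network.

# Illustrative sketch: score a candidate trust relation by cosine similarity
# of two users' feature vectors. The vectors are random placeholders.
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two user feature vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
user_a = rng.normal(size=64)   # placeholder latent features for user A
user_b = rng.normal(size=64)   # placeholder latent features for user B

score = cosine_similarity(user_a, user_b)
print(f"predicted trust score: {score:.3f}")  # higher -> more likely to trust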

Narang, S., Byali, M., Dayama, P., Pandit, V., Narahari, Y..  2019.  Design of Trusted B2B Market Platforms using Permissioned Blockchains and Game Theory. 2019 IEEE International Conference on Blockchain and Cryptocurrency (ICBC). :385—393.

Trusted collaboration satisfying the requirements of (a) adequate transparency and (b) preservation of privacy of business sensitive information is a key factor to ensure the success and adoption of online business-to-business (B2B) collaboration platforms. Our work proposes novel ways of stringing together game theoretic modeling, blockchain technology, and cryptographic techniques to build such a platform for B2B collaboration involving enterprise buyers and sellers who may be strategic. The B2B platform builds upon three ideas. The first is to use a permissioned blockchain with smart contracts as the technical infrastructure for building the platform. Second, the above smart contracts implement deep business logic which is derived using a rigorous analysis of a repeated game model of the strategic interactions between buyers and sellers to devise strategies to induce honest behavior from buyers and sellers. Third, we present a formal framework that captures the essential requirements for secure and private B2B collaboration, and, in this direction, we develop cryptographic regulation protocols that, in conjunction with the blockchain, help implement such a framework. We believe our work is an important first step in the direction of building a platform that enables B2B collaboration among strategic and competitive agents while maximizing social welfare and addressing the privacy concerns of the agents.

Vaka, A., Manasa, G., Sameer, G., Das, B..  2019.  Generation And Analysis Of Trust Networks. 2019 1st International Conference on Advances in Information Technology (ICAIT). :443—448.

Trust is known to be a key component in human social relationships. It is trust that defines human behavior with others to a large extent. Generative models have been extensively used in the study of social networks to simulate different characteristics and phenomena in social graphs. In this work, an attempt is made to understand how trust in social graphs can be combined with generative modeling techniques to generate trust-based social graphs. These generated social graphs are then compared with the original social graphs to evaluate how trust helps in generative modeling. Two well-known social network data sets, i.e., the soc-Bitcoin and the wiki administrator network data sets, are used in this work. Social graphs are generated from these data sets and then compared with the original graphs, along with other standard generative modeling techniques, to see whether trust is a useful component in this process. Generative modeling techniques have been available for a while, but this investigation with real social graph data sets validates that trust can be an important factor in generative modeling.

Wang, W., Xuan, S., Yang, W., Chen, Y..  2019.  User Credibility Assessment Based on Trust Propagation in Microblog. 2019 Computing, Communications and IoT Applications (ComComAp). :270—275.

Nowadays, Microblog has become an important online social networking platform, and a large number of users share information through Microblog. Many malicious users have released various kinds of false news driven by various interests, which seriously affects the availability of the Microblog platform. Therefore, evaluating Microblog user credibility has become an important research issue. This paper proposes a Microblog user credibility evaluation algorithm based on trust propagation. In view of the high cost and low precision caused by malicious users attacking the algorithm through the establishment of false social relationships, and by the manual selection of seed sets, this paper proposes two optimization strategies: a pruning algorithm based on social activity and similarity, and a seed node selection algorithm based on clustering. The pruning algorithm can trim off the attack edges established between malicious users and normal users. The seed node selection algorithm can efficiently select a highly available seed node set. Finally, the user social relationship graph is used to perform two-way propagation trust scoring, so that less trusted users receive lower trust scores and malicious users can thus be identified. The experiments verify the effectiveness of the trust-propagation-based algorithm in evaluating Microblog user credibility.
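
The sketch below illustrates seed-based trust propagation on a social graph in the spirit of the scoring the abstract describes; the graph, seed set, damping factor, and iteration count are invented examples, not the paper's algorithm or data.

# Illustrative sketch of trust propagation from a trusted seed set over a
# "who trusts whom" graph. Users reachable from the seeds accumulate trust;
# isolated or attack-only accounts end up with low scores.

def propagate_trust(graph, seeds, damping=0.85, iterations=20):
    """graph: dict node -> list of trusted neighbours; seeds: trusted nodes."""
    nodes = set(graph) | {n for nbrs in graph.values() for n in nbrs}
    trust = {n: (1.0 if n in seeds else 0.0) for n in nodes}
    seed_mass = dict(trust)
    for _ in range(iterations):
        incoming = {n: 0.0 for n in nodes}
        for src, nbrs in graph.items():
            if nbrs:
                share = trust[src] / len(nbrs)
                for dst in nbrs:
                    incoming[dst] += share
        trust = {n: (1 - damping) * seed_mass[n] + damping * incoming[n]
                 for n in nodes}
    return trust

follow_graph = {            # hypothetical trust edges between accounts
    "alice": ["bob", "carol"],
    "bob": ["carol"],
    "carol": ["alice"],
    "mallory": ["alice"],   # attack edge toward a normal user
}
scores = propagate_trust(follow_graph, seeds={"alice"})
print({n: round(s, 3) for n, s in sorted(scores.items())})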

Abeysekara, P., Dong, H., Qin, A. K..  2019.  Machine Learning-Driven Trust Prediction for MEC-Based IoT Services. 2019 IEEE International Conference on Web Services (ICWS). :188—192.

We propose a distributed machine-learning architecture to predict the trustworthiness of sensor services in Mobile Edge Computing (MEC) based Internet of Things (IoT) services, which aligns well with the goals of MEC and the requirements of modern IoT systems. The proposed machine-learning architecture models the training of a distributed trust prediction model over a topology of MEC environments as a Network Lasso problem, which allows simultaneous clustering and optimization on large-scale networked graphs. We then attempt to solve it using the Alternating Direction Method of Multipliers (ADMM) in a way that makes it suitable for MEC-based IoT systems. We present analytical and simulation results to show the validity and efficiency of the proposed solution.
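
For orientation, the Network Lasso objective referenced in the abstract couples local model fitting at each node with an edge penalty that encourages neighbouring nodes to share models. The sketch below only evaluates that objective for a toy MEC topology (data, edges, and lambda are invented); a real solver would minimise it with ADMM rather than just evaluate it.

# Illustrative sketch of the Network Lasso objective:
#   sum_i ||A_i x_i - b_i||^2  +  lambda * sum_(j,k) ||x_j - x_k||
import numpy as np

def network_lasso_objective(X, local_data, edges, lam):
    """X: dict node -> parameter vector; local_data: node -> (A, b)."""
    loss = sum(np.linalg.norm(A @ X[i] - b) ** 2
               for i, (A, b) in local_data.items())
    penalty = sum(np.linalg.norm(X[j] - X[k]) for j, k in edges)
    return loss + lam * penalty

rng = np.random.default_rng(1)
nodes = ["mec-1", "mec-2", "mec-3"]
local_data = {n: (rng.normal(size=(5, 2)), rng.normal(size=5)) for n in nodes}
edges = [("mec-1", "mec-2"), ("mec-2", "mec-3")]   # hypothetical MEC topology
X = {n: np.zeros(2) for n in nodes}                # initial local models

print(round(network_lasso_objective(X, local_data, edges, lam=0.5), 3))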

Niz, D. de, Andersson, B., Klein, M., Lehoczky, J., Vasudevan, A., Kim, H., Moreno, G..  2019.  Mixed-Trust Computing for Real-Time Systems. 2019 IEEE 25th International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA). :1—11.

Verifying complex Cyber-Physical Systems (CPS) is increasingly important given the push to deploy safety-critical autonomous features. Unfortunately, traditional verification methods do not scale to the complexity of these systems and do not provide systematic methods to protect verified properties when not all the components can be verified. To address these challenges, this paper proposes a real-time mixed-trust computing framework that combines verification and protection. The framework introduces a new task model, where an application task can have both an untrusted and a trusted part. The untrusted part allows complex computations supported by a full OS with a realtime scheduler running in a VM hosted by a trusted hypervisor. The trusted part is executed by another scheduler within the hypervisor and is thus protected from the untrusted part. If the untrusted part fails to finish by a specific time, the trusted part is activated to preserve safety (e.g., prevent a crash) including its timing guarantees. This framework is the first allowing the use of untrusted components for CPS critical functions while preserving logical and timing guarantees, even in the presence of malicious attackers. We present the framework design and implementation along with the schedulability analysis and the coordination protocol between the trusted and untrusted parts. We also present our Raspberry Pi 3 implementation along with experiments showing the behavior of the system under failures of untrusted components, and a drone application to demonstrate its practicality.
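
As a loose, hedged analogy to the enforcement idea in the abstract (not the paper's hypervisor-based framework or its schedulability analysis), the sketch below gives an untrusted computation a time budget and falls back to a small trusted routine if the budget is exceeded. Function names, the budget, and the "commands" are invented.

# Illustrative sketch: run an untrusted computation with a deadline and
# switch to a trusted fallback if it overruns. Plain-Python analogy only.
import multiprocessing as mp
import time

def untrusted_controller(queue):
    """Complex (untrusted) computation; may overrun its budget or crash."""
    time.sleep(0.5)                  # simulate a computation that runs long
    queue.put("untrusted command")

def trusted_fallback():
    """Small, verified fallback that preserves safety (e.g. hover in place)."""
    return "safe fallback command"

def run_with_budget(budget_s=0.1):
    queue = mp.Queue()
    worker = mp.Process(target=untrusted_controller, args=(queue,))
    worker.start()
    worker.join(timeout=budget_s)
    if worker.is_alive():             # deadline missed: enforce the budget
        worker.terminate()
        worker.join()
        return trusted_fallback()
    return queue.get()

if __name__ == "__main__":
    print(run_with_budget())          # prints the safe fallback command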

Malvankar, A., Payne, J., Budhraja, K. K., Kundu, A., Chari, S., Mohania, M..  2019.  Malware Containment in Cloud. 2019 First IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :221—227.

Malware is pervasive and poses serious threats to the normal operation of business processes in the cloud. Cloud computing environments typically have hundreds of hosts that are connected to each other, often with high-risk trust assumptions and/or protection mechanisms that are not difficult to break. Malware often exploits such weaknesses, as its immediate goal is often to spread itself to as many hosts as possible. Detecting this propagation is often difficult to address because the malware may reside in multiple components across the software or hardware stack. In this scenario, it is usually best to contain the malware to the smallest possible number of hosts, and it is also critical for system administrators to resolve the issue in a timely manner. Furthermore, resolution often requires that several participants across different organizational teams scramble together to address the intrusion. In this vision paper, we define this problem in detail. We then present our vision of decentralized malware containment and the challenges and issues associated with this vision. The approach to containment involves detection and response using graph analytics coupled with a blockchain framework. We propose the use of a dominance frontier for profiling nodes which must be involved in the containment process. Smart contracts are used to obtain consensus amongst the involved parties. The paper presents a basic implementation of this proposal. We further discuss some open problems related to our vision.

2020-12-01
Craggs, B., Rashid, A..  2019.  Trust Beyond Computation Alone: Human Aspects of Trust in Blockchain Technologies. 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Society (ICSE-SEIS). :21—30.

Blockchains - with their inherent properties of transaction transparency, distributed consensus, immutability and cryptographic verifiability - are increasingly seen as a means to underpin innovative products and services in a range of sectors from finance through to energy and healthcare. Discussions, too often, make assertions that the trustless nature of blockchain technologies enables and actively promotes their suitability - there being no need to trust third parties or centralised control. Yet humans need to be able to trust systems, and others with whom the system enables transactions. In this paper, we highlight that understanding this need for trust is critical for the development of blockchain-based systems. Through an online study with 125 users of the most well-known of blockchain based systems - the cryptocurrency Bitcoin - we uncover that human and institutional aspects of trust are pervasive. Our analysis highlights that, when designing future blockchain-based technologies, we ought to not only consider computational trust but also the wider eco-system, how trust plays a part in users engaging/disengaging with such eco-systems and where design choices impact upon trust. From this, we distill a set of guidelines for software engineers developing blockchain-based systems for societal applications.

Sun, P., Yin, S., Man, W., Tao, T..  2018.  Research of Personalized Recommendation Algorithm Based on Trust and User's Interest. 2018 International Conference on Robots Intelligent System (ICRIS). :153—156.

Most traditional recommendation algorithms only consider the binary relationship between users and items, so they can basically be converted into score prediction problems. However, most of these algorithms ignore users' interests, potential work factors, or other social factors of the recommended products. In this paper, based on an existing trustworthiness model and similarity measure, we put forward the concept of trust similarity and design a joint interest-content recommendation framework to suggest to users which videos to watch on an online video site. In this framework, we first analyze the user's viewing history and tags and establish the user's interest characteristic vector. Then, based on the updated vector, users are clustered by a sparse subspace clustering algorithm, which improves the efficiency of the algorithm. We also improve the calculation of similarity to help users find better neighbors. Finally, we conduct experiments using real traces from Tencent Weibo and Youku to verify our method and evaluate its performance. The results demonstrate the effectiveness of our approach and show that it can substantially improve recommendation accuracy.
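
The sketch below illustrates one simple way to blend interest similarity with trust into a single neighbour score, in the spirit of the "trust similarity" idea the abstract describes. The interest vectors, trust value, and mixing weight alpha are invented examples, not the paper's actual formulation.

# Illustrative sketch: neighbour score = alpha * interest similarity
#                                        + (1 - alpha) * trust.
import numpy as np

def interest_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def neighbour_score(u_interest, v_interest, trust_uv, alpha=0.6):
    """Blend interest similarity and trust with weight alpha."""
    return (alpha * interest_similarity(u_interest, v_interest)
            + (1 - alpha) * trust_uv)

user_interest = np.array([0.9, 0.1, 0.4])      # e.g. tag/genre preferences
candidate_interest = np.array([0.8, 0.2, 0.5])
trust_between = 0.7                             # hypothetical trust value

print(round(neighbour_score(user_interest, candidate_interest,
                            trust_between), 3))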

Attia, M., Hossny, M., Nahavandi, S., Dalvand, M., Asadi, H..  2018.  Towards Trusted Autonomous Surgical Robots. 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :4083—4088.

Over the last few decades, a breakthrough has taken place in the field of autonomous robotics. Autonomous robots have been introduced to perform dangerous, dirty, difficult, and dull tasks to serve the community. They have also been used to address health-care related tasks, such as enhancing the surgical skills of surgeons and enabling surgeries in remote areas. This may help to perform operations in remote areas efficiently and in a timely manner, with or without human intervention. One of the main advantages is that robots are not affected by human-related problems such as fatigue or momentary lapses of attention. Thus, they can perform repeated and tedious operations. In this paper, we propose a framework to establish trust in autonomous medical robots based on mutual understanding and transparency in decision making.

Byrne, K., Marín, C..  2018.  Human Trust in Robots When Performing a Service. 2018 IEEE 27th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE). :9—14.

The presence of robots is becoming more apparent as technology progresses and the market focus transitions from smart phones to robotic personal assistants such as those provided by Amazon and Google. The integration of robots in our societies is an inevitable tendency in which robots in many forms and with many functionalities will provide services to humans. This calls for an understanding of how humans are affected by both the presence of and the reliance on robots to perform services for them. In this paper we explore the effects that robots have on humans when a service is performed on request. We expose three groups of human participants to three levels of service completion performed by robots. We record and analyse human perceptions such as propensity to trust, competency, responsiveness, sociability, and team work ability. Our results demonstrate that humans tend to trust robots and are more willing to interact with them when they autonomously recover from failure by requesting help from other robots to fulfil their service. This supports the view that autonomy and team working capabilities must be brought into robots in an effort to strengthen trust in robots performing a service.

Herse, S., Vitale, J., Tonkin, M., Ebrahimian, D., Ojha, S., Johnston, B., Judge, W., Williams, M..  2018.  Do You Trust Me, Blindly? Factors Influencing Trust Towards a Robot Recommender System. 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). :7—14.

When robots and human users collaborate, trust is essential for user acceptance and engagement. In this paper, we investigated two factors thought to influence user trust towards a robot: preference elicitation (a combination of user involvement and explanation) and embodiment. We set our experiment in the application domain of a restaurant recommender system, assessing trust via user decision making and perceived source credibility. Previous research in this area uses simulated environments and recommender systems that present the user with the best choice from a pool of options. This experiment builds on past work in two ways: first, we strengthened the ecological validity of our experimental paradigm by incorporating perceived risk during decision making; and second, we used a system that recommends a nonoptimal choice to the user. While no effect of embodiment is found for trust, the inclusion of preference elicitation features significantly increases user trust towards the robot recommender system. These findings have implications for marketing and health promotion in relation to Human-Robot Interaction and call for further investigation into the development and maintenance of trust between robot and user.

Xie, Y., Bodala, I. P., Ong, D. C., Hsu, D., Soh, H..  2019.  Robot Capability and Intention in Trust-Based Decisions Across Tasks. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :39—47.

In this paper, we present results from a human-subject study designed to explore two facets of human mental models of robots - inferred capability and intention - and their relationship to overall trust and eventual decisions. In particular, we examine delegation situations characterized by uncertainty, and explore how inferred capability and intention are applied across different tasks. We develop an online survey where human participants decide whether to delegate control to a simulated UAV agent. Our study shows that human estimations of robot capability and intent correlate strongly with overall self-reported trust. However, overall trust is not independently sufficient to determine whether a human will decide to trust (delegate) a given task to a robot. Instead, our study reveals that estimations of robot intention, capability, and overall trust are integrated when deciding to delegate. From a broader perspective, these results suggest that calibrating overall trust alone is insufficient; to make correct decisions, humans need (and use) multi-faceted mental models when collaborating with robots across multiple contexts.

Nielsen, C., Mathiesen, M., Nielsen, J., Jensen, L. C..  2019.  Changes in Heart Rate and Feeling of Safety When Led by a Rehabilitation Robot. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :580—581.

Trust is an important topic in medical human-robot interaction, since patients may be more fragile than other groups of people. This paper investigates the issue of users' trust when interacting with a rehabilitation robot. In the study, we investigate participants' heart rate and perception of safety in a scenario where their arm is led by the rehabilitation robot in two types of exercises at three different velocities. The participants' heart rate is measured during each exercise and the participants are asked how safe they feel after each exercise. The results showed that velocity and type of exercise have no significant influence on the participants' heart rate, but they do have a significant influence on how safe they feel. We found that increasing velocity and longer exercises negatively influence participants' perception of safety.

Haider, C., Chebotarev, Y., Tsiourti, C., Vincze, M..  2019.  Effects of Task-Dependent Robot Errors on Trust in Human-Robot Interaction: A Pilot Study. 2019 IEEE SmartWorld, Ubiquitous Intelligence Computing, Advanced Trusted Computing, Scalable Computing Communications, Cloud Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI). :172—177.

The growing diffusion of robotics in our daily life demands a deeper understanding of the mechanisms of trust in human-robot interaction. The performance of a robot is one of the most important factors influencing the trust of a human user. However, it is still unclear whether the circumstances in which a robot fails affect the user's trust. We investigate how the perception of robot failures may influence the willingness of people to cooperate with the robot by following its instructions in a time-critical task. We conducted an experiment in which participants interacted with a robot that had previously failed in a related or an unrelated task. We hypothesized that users' observed and self-reported trust ratings would be higher in the condition where the robot had previously failed in an unrelated task. A proof-of-concept study with nine participants tentatively confirms our hypothesis. At the same time, our results reveal some flaws in the experimental design and encourage a future large-scale study.

Sebo, S. S., Krishnamurthi, P., Scassellati, B..  2019.  “I Don't Believe You”: Investigating the Effects of Robot Trust Violation and Repair. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :57—65.

When a robot breaks a person's trust by making a mistake or failing, continued interaction will depend heavily on how the robot repairs the trust that was broken. Prior work in psychology has demonstrated that both the trust violation framing and the trust repair strategy influence how effectively trust can be restored. We investigate trust repair between a human and a robot in the context of a competitive game, where a robot tries to restore a human's trust after a broken promise, using either a competence or integrity trust violation framing and either an apology or denial trust repair strategy. Results from a 2×2 between-subjects study (n=82) show that participants interacting with a robot employing the integrity trust violation framing and the denial trust repair strategy are significantly more likely to exhibit behavioral retaliation toward the robot. In the Dyadic Trust Scale survey, an interaction between trust violation framing and trust repair strategy was observed. Our results demonstrate the importance of considering both trust violation framing and trust repair strategy choice when designing robots to repair trust. We also discuss the influence of human-to-robot promises and ethical considerations when framing and repairing trust between a human and robot.

Geiskkovitch, D. Y., Thiessen, R., Young, J. E., Glenwright, M. R..  2019.  What? That's Not a Chair!: How Robot Informational Errors Affect Children's Trust Towards Robots. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :48—56.

Robots that interact with children are becoming more common in places such as child care and hospital environments. While such robots may mistakenly provide nonsensical information, or have mechanical malfunctions, we know little of how these robot errors are perceived by children, and how they impact trust. This is particularly important when robots provide children with information or instructions, such as in education or health care. Drawing inspiration from established psychology literature investigating how children trust entities who teach or provide them with information (informants), we designed and conducted an experiment to examine how robot errors affect how young children (3-5 years old) trust robots. Our results suggest that children utilize their understanding of people to develop their perceptions of robots, and use this to determine how to interact with robots. Specifically, we found that children developed their trust model of a robot based on the robot's previous errors, similar to how they would for a person. We however failed to replicate other prior findings with robots. Our results provide insight into how children as young as 3 years old might perceive robot errors and develop trust.

Ullman, D., Malle, B. F..  2019.  Measuring Gains and Losses in Human-Robot Trust: Evidence for Differentiable Components of Trust. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :618—619.

Human-robot trust is crucial to successful human-robot interaction. We conducted a study with 798 participants distributed across 32 conditions using four dimensions of human-robot trust (reliable, capable, ethical, sincere) identified by the Multi-Dimensional-Measure of Trust (MDMT). We tested whether these dimensions can differentially capture gains and losses in human-robot trust across robot roles and contexts. Using a 4 scenario × 4 trust dimension × 2 change direction between-subjects design, we found the behavior change manipulation effective for each of the four subscales. However, the pattern of results best supported a two-dimensional conception of trust, with reliable-capable and ethical-sincere as the major constituents.