Biblio

Filters: Keyword is human trust
2018-02-14
Filip, G., Meng, X., Burnett, G., Harvey, C..  2017.  Human factors considerations for cooperative positioning using positioning, navigational and sensor feedback to calibrate trust in CAVs. 2017 Forum on Cooperative Positioning and Service (CPGPS). :134–139.

Given the complexities involved in the sensing, navigational and positioning environment on board automated vehicles, we conduct an exploratory survey and identify factors capable of influencing users' trust in such a system. After the analysis of the survey data, the Situational Awareness of the Vehicle (SAV) emerges as an important factor capable of influencing the trust of the users. We follow up by conducting semi-structured interviews with 12 experts in the CAV field, focusing on the importance of the SAV, on the factors that matter most when discussing it, and on the need to keep users informed regarding its status. We conclude that in the context of Connected and Automated Vehicles (CAVs), the importance of the SAV can now be expanded beyond its technical necessity of making vehicles function to a human factors area: calibrating users' trust.

Nam, C., Walker, P., Lewis, M., Sycara, K..  2017.  Predicting trust in human control of swarms via inverse reinforcement learning. 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). :528–533.
In this paper, we study a model of human trust in a setting where an operator controls a robotic swarm remotely for a search mission. Existing trust models in human-in-the-loop systems are based on the task performance of robots. However, we find that humans tend to make their decisions based on the physical characteristics of the swarm rather than its performance, since the task performance of swarms is not clearly perceivable by humans. We formulate trust as a Markov decision process whose state space includes physical parameters of the swarm. We employ an inverse reinforcement learning algorithm to learn the behaviors of the operator from a single demonstration. The learned behaviors are used to predict the trust level of the operator based on the features of the swarm.
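
A minimal, hypothetical sketch of the modeling idea in this abstract: trust is treated as a discrete state driven by physical swarm features, and a linear reward over those features (of the kind an inverse reinforcement learning algorithm might recover from a demonstration) scores candidate trust levels. The feature names, discretization, and weights below are illustrative assumptions, not values from the paper.

    import numpy as np

    TRUST_LEVELS = [0, 1, 2]                             # assumed discretization: low / medium / high
    FEATURES = ["density", "speed_var", "connectivity"]  # hypothetical physical swarm parameters

    def featurize(swarm_state, trust_level):
        # Concatenate the physical swarm features with a candidate trust level.
        x = np.array([swarm_state[f] for f in FEATURES], dtype=float)
        return np.append(x, float(trust_level))

    def predict_trust(swarm_state, weights):
        # Pick the trust level whose (learned) linear reward is highest for this state.
        scores = [weights @ featurize(swarm_state, t) for t in TRUST_LEVELS]
        return int(np.argmax(scores))

    # Made-up weights standing in for the reward an IRL algorithm would learn
    # from the operator's demonstration.
    weights = np.array([-0.5, -1.2, 0.8, 0.3])
    state = {"density": 0.4, "speed_var": 0.1, "connectivity": 0.9}
    print(predict_trust(state, weights))                 # -> 2 (high trust) for this toy state
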
2017-05-16
Pearson, Carl J., Welk, Allaire K., Boettcher, William A., Mayer, Roger C., Streck, Sean, Simons-Rudolph, Joseph M., Mayhorn, Christopher B..  2016.  Differences in Trust Between Human and Automated Decision Aids. Proceedings of the Symposium and Bootcamp on the Science of Security. :95–98.

Humans can easily find themselves in high-cost situations where they must choose between suggestions made by an automated decision aid and a conflicting human decision aid. Previous research indicates that humans often rely on automation or other humans, but not both simultaneously. Expanding on previous work conducted by Lyons and Stokes (2012), the current experiment measures how trust in automated or human decision aids differs along with perceived risk and workload. The simulated task required 126 participants to choose the safest route for a military convoy; they were presented with conflicting information from an automated tool and a human. Results demonstrated that as workload increased, trust in automation decreased. As perceived risk increased, trust in the human decision aid increased. Individual differences in dispositional trust correlated with increased trust in both decision aids. These findings can be used to inform training programs for operators who may receive information from human and automated sources. Examples of this context include air traffic control, aviation, and signals intelligence.

Lucas, Gale, Stratou, Giota, Lieblich, Shari, Gratch, Jonathan.  2016.  Trust Me: Multimodal Signals of Trustworthiness. Proceedings of the 18th ACM International Conference on Multimodal Interaction. :5–12.

This paper builds on prior psychological studies that identify signals of trustworthiness between two human negotiators. Unlike prior work, the current work tracks such signals automatically and fuses them into computational models that predict trustworthiness. To achieve this goal, we apply automatic trackers to recordings of human dyads negotiating in a multi-issue bargaining task. We identify behavioral indicators in different modalities (facial expressions, gestures, gaze, and conversational features) that are predictive of trustworthiness. We predict both objective trustworthiness (i.e., are they honest) and perceived trustworthiness (i.e., do they seem honest to their interaction partner). Our experiments show that people are poor judges of objective trustworthiness (i.e., objective and perceived trustworthiness are predicted by different indicators), and that multimodal approaches better predict objective trustworthiness, whereas people overly rely on facial expressions when judging the honesty of their partner. Moreover, domain knowledge (from the literature and prior analysis of behaviors) facilitates the model development process.
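
A hypothetical sketch of the kind of multimodal fusion this abstract describes: per-modality features (facial expressions, gestures, gaze, conversational features) are concatenated and fed to a simple classifier that predicts a trustworthiness label. The automatic trackers are assumed to have already produced the features; the random data below merely stands in for them.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Stand-in feature matrices: one row per recorded negotiator.
    facial  = rng.normal(size=(100, 5))     # e.g., smile intensity, brow movement, ...
    gesture = rng.normal(size=(100, 3))
    gaze    = rng.normal(size=(100, 2))
    speech  = rng.normal(size=(100, 4))     # conversational features
    labels  = rng.integers(0, 2, size=100)  # 1 = objectively honest, 0 = not

    X = np.hstack([facial, gesture, gaze, speech])   # early fusion by concatenation
    model = LogisticRegression(max_iter=1000).fit(X, labels)
    print("training accuracy:", model.score(X, labels))
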

Robert, Jr., Lionel P..  2016.  Monitoring and Trust in Virtual Teams. Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing. :245–259.

This study was conducted to determine whether monitoring moderated the impact of trust on the project performance of 57 virtual teams. Two sources of monitoring were examined: internal monitoring done by team members and external monitoring done by someone outside of the team. Two types of trust were also examined: affective trust, or trust based on emotion, and cognitive trust, or trust based on competency. Results indicate that when internal monitoring was high, affective trust was associated with increases in performance. However, affective trust was associated with decreases in performance when external monitoring was high. Both types of monitoring reduced the strong positive relationship between cognitive trust and the performance of virtual teams. Results of this study provide new insights about monitoring and trust in virtual teams and inform both theory and design.

Kizilcec, René F..  2016.  How Much Information?: Effects of Transparency on Trust in an Algorithmic Interface. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. :2390–2395.

The rising prevalence of algorithmic interfaces, such as curated feeds in online news, raises new questions for designers, scholars, and critics of media. This work focuses on how transparent design of algorithmic interfaces can promote awareness and foster trust. A two-stage process of how transparency affects trust was hypothesized drawing on theories of information processing and procedural justice. In an online field experiment, three levels of system transparency were tested in the high-stakes context of peer assessment. Individuals whose expectations were violated (by receiving a lower grade than expected) trusted the system less, unless the grading algorithm was made more transparent through explanation. However, providing too much information eroded this trust. Attitudes of individuals whose expectations were met did not vary with transparency. Results are discussed in terms of a dual process model of attitude change and the depth of justification of perceived inconsistency. Designing for trust requires balanced interface transparency - not too little and not too much.

Sänger, Johannes, Hänsch, Norman, Glass, Brian, Benenson, Zinaida, Landwirth, Robert, Sasse, M. Angela.  2016.  Look Before You Leap: Improving the Users' Ability to Detect Fraud in Electronic Marketplaces. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. :3870–3882.

Reputation systems in current electronic marketplaces can easily be manipulated by malicious sellers in order to appear more reputable than appropriate. We conducted a controlled experiment with 40 UK and 41 German participants on their ability to detect malicious behavior by means of an eBay-like feedback profile versus a novel interface involving an interactive visualization of reputation data. The results show that participants using the new interface could better detect and understand malicious behavior in three out of four attacks (overall detection accuracy was 77% with the new interface vs. 56% with the old). Moreover, with the new interface, only 7% of the users decided to buy from the malicious seller (the options being to buy from one of the available sellers or to abstain from buying), as opposed to 30% in the old interface condition.

Calefato, Fabio, Lanubile, Filippo.  2016.  Affective Trust As a Predictor of Successful Collaboration in Distributed Software Projects. Proceedings of the 1st International Workshop on Emotion Awareness in Software Engineering. :3–5.

Building trust among remote developers is challenging because trust typically grows through close face-to-face interaction. In this paper, we present the preparatory design of an empirical study aimed at assessing whether affective trust, established through social communication between developers, is a predictor of successful collaboration in distributed projects. Specifically, we intend to measure affective trust through sentiment analysis of pull-request comments.
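
A rough illustration of the measurement idea, using an off-the-shelf sentiment analyzer (VADER from NLTK) over pull-request comments; the abstract only states the intent to use sentiment analysis, so the tool choice, the example comments, and the aggregation below are assumptions.

    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)
    sia = SentimentIntensityAnalyzer()

    # Hypothetical pull-request comments exchanged between two remote developers.
    pr_comments = [
        "Thanks a lot, this looks great!",
        "Please fix the failing tests before we merge.",
        "I really appreciate the quick turnaround on this.",
    ]

    # Average compound polarity as a crude proxy for the affective tone of the exchange.
    scores = [sia.polarity_scores(c)["compound"] for c in pr_comments]
    print("mean comment polarity:", sum(scores) / len(scores))
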

Jang, Min-Hee, Kim, Sang-Wook, Ha, Jiwoon.  2016.  Effectiveness of Reverse Edges and Uncertainty in PIN-TRUST for Trust Prediction. Proceedings of the Sixth International Conference on Emerging Databases: Technologies, Applications, and Theory. :81–85.

Recently, PIN-TRUST, a method to predict future trust relationships between users, was proposed. PIN-TRUST outperforms existing trust prediction methods by exploiting all types of interactions between users and the reciprocation of those interactions. In this paper, we validate whether its consideration of the reciprocation of interactions is really effective in trust prediction. Furthermore, we consider a new concept, the "uncertainty" of untrustworthy users, which is devised to reflect the difficulty of modeling the activities of untrustworthy users in PIN-TRUST. Then, we also validate the effectiveness of this uncertainty concept. Through the validation, we reveal that the consideration of the reciprocation of interactions is effective for trust prediction with PIN-TRUST, and that it is necessary to treat the uncertainty of untrustworthy users the same as that of other users.
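
A toy sketch (not the PIN-TRUST model itself) of why reciprocation can matter for trust prediction: a pairwise score combines the interactions a user sends toward another with the interactions sent back. The counts and weights are illustrative assumptions.

    from collections import defaultdict

    # (sender, receiver) -> count of positive interactions; illustrative data.
    interactions = defaultdict(int)
    interactions[("alice", "bob")] = 5
    interactions[("bob", "alice")] = 4    # reciprocated
    interactions[("alice", "carol")] = 5  # same direct activity, never reciprocated

    def trust_score(u, v, w_direct=1.0, w_recip=0.5):
        # Higher when u interacts with v AND v reciprocates.
        return w_direct * interactions[(u, v)] + w_recip * interactions[(v, u)]

    print(trust_score("alice", "bob"))    # 7.0: reciprocation boosts the predicted trust
    print(trust_score("alice", "carol"))  # 5.0: no reciprocation
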

Conway, Dan, Chen, Fang, Yu, Kun, Zhou, Jianlong, Morris, Richard.  2016.  Misplaced Trust: A Bias in Human-Machine Trust Attribution – In Contradiction to Learning Theory. Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. :3035–3041.

Human-machine trust is a critical mitigating factor in many HCI instances. Lack of trust in a system can lead to system disuse, whilst over-trust can lead to inappropriate use. Whilst human-machine trust has been examined extensively from within a technico-social framework, few efforts have been made to link the dynamics of trust within a steady-state operator-machine environment to the existing literature on the psychology of learning. We set out to recreate a commonly reported learning phenomenon within a trust acquisition environment: users learning which algorithms can and cannot be trusted to reduce traffic in a city. We failed to replicate (after repeated efforts) the learning phenomenon of "blocking", resulting in a finding that people consistently make a very specific error in trust assignment to cues in conditions of uncertainty. This error can be seen as a cognitive bias and has important implications for HCI.

Depping, Ansgar E., Mandryk, Regan L., Johanson, Colby, Bowey, Jason T., Thomson, Shelby C..  2016.  Trust Me: Social Games Are Better Than Social Icebreakers at Building Trust. Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play. :116–129.

Interpersonal trust is one of the key components of efficient teamwork. Research suggests two main approaches for trust formation: personal information exchange (e.g., social icebreakers), and creating a context of risk and interdependence (e.g., trust falls). However, because these strategies are difficult to implement in an online setting, trust is more difficult to achieve and preserve in distributed teams. In this paper, we argue that games are an optimal environment for trust formation because they can simulate both risk and interdependence. Results of our online experiment show that a social game can be more effective than a social task at fostering interpersonal trust. Furthermore, trust formation through the game is reliable, but trust depends on several contingencies in the social task. Our work suggests that gameplay interactions do not merely promote impoverished versions of the rich ties formed through conversation, but rather engender genuine social bonds.

Worthy, Peter, Matthews, Ben, Viller, Stephen.  2016.  Trust Me: Doubts and Concerns Living with the Internet of Things. Proceedings of the 2016 ACM Conference on Designing Interactive Systems. :427–434.

An increasing number of everyday objects are now connected to the internet, collecting and sharing information about us: the "Internet of Things" (IoT). However, as the number of "social" objects increases, human concerns arising from this connected world are starting to become apparent. This paper presents the results of a preliminary qualitative study in which five participants lived with an ambiguous IoT device that collected and shared data about their activities at home for a week. In analyzing this data, we identify the nature of human and socio-technical concerns that arise when living with IoT technologies. Trust is identified as a critical factor: as trust in the entities able to use the collected information decreases, users are likely to demand greater control over information collection. Addressing these concerns may support greater engagement of users with IoT technology. The paper concludes with a discussion of how IoT systems might be designed to better foster trust with their owners.