Biblio

Filters: Keyword is human trust
2020-12-01
Tanana, D..  2019.  Decentralized Labor Record System Based on Wavelet Consensus Protocol. 2019 International Multi-Conference on Engineering, Computer and Information Sciences (SIBIRCON). :0496–0499.

The labor market involves several untrusted actors with contradicting objectives. We propose a blockchain-based system for the labor market, which provides benefits to all participants in terms of confidence, transparency, trust and tracking. Our system would handle employment data through the new Wavelet blockchain platform. It would change the job market by enabling direct agreements between parties without third parties, and by providing new mechanisms for negotiating employment conditions. Furthermore, our system would reduce the need for the existing paper workflow as well as for major internet recruiting companies. The key differences of our work from other blockchain-based labor record systems are the usage of the Wavelet blockchain platform, which features metastability, a directed acyclic graph system and a Turing-complete smart contract platform, and the introduction of human interaction inside the smart contract logic, instead of automatic execution of contracts. The results are promising though inconclusive, and we will further explore the potential of blockchain solutions for labor market problems.

Goel, A., Agarwal, A., Vatsa, M., Singh, R., Ratha, N..  2019.  DeepRing: Protecting Deep Neural Network With Blockchain. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :2821–2828.

Several computer vision applications, such as object detection and face recognition, have started to rely completely on deep learning based architectures. These architectures, when paired with appropriate loss functions and optimizers, produce state-of-the-art results in a myriad of problems. On the other hand, with the advent of "blockchain", the cybersecurity industry has developed a new sense of trust which was earlier missing from both the technical and commercial perspectives. Employment of cryptographic hashes as well as symmetric/asymmetric encryption and decryption algorithms ensures security without any human intervention (i.e., without a centralized authority). In this research, we present the synergy between the best of both these worlds. We first propose a model which uses the learned parameters of a typical deep neural network and is secured from external adversaries by cryptography and blockchain technology. As the second contribution of this research, a new parameter tampering attack is proposed to properly justify the role of blockchain in machine learning.

Apau, M. N., Sedek, M., Ahmad, R..  2019.  A Theoretical Review: Risk Mitigation Through Trusted Human Framework for Insider Threats. 2019 International Conference on Cybersecurity (ICoCSec). :37–42.

This paper discusses possible efforts to mitigate insider threat risk and aims to inspire organizations to consider identifying insider threats as one of the risks in the company's enterprise risk management activities. The paper suggests the Trusted Human Framework (THF) as an ongoing, cyclic process to detect and deter potential employees who are bound to become fraudsters or perpetrators violating the access and trust given to them. The mitigation control statements were derived from the recommended practices in the “Common Sense Guide to Mitigating Insider Threats” produced by the Software Engineering Institute, Carnegie Mellon University (SEI-CMU). The statements were validated via a survey answered by fifty respondents who work in Malaysia.

Craggs, B., Rashid, A..  2019.  Trust Beyond Computation Alone: Human Aspects of Trust in Blockchain Technologies. 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Society (ICSE-SEIS). :21–30.

Blockchains, with their inherent properties of transaction transparency, distributed consensus, immutability and cryptographic verifiability, are increasingly seen as a means to underpin innovative products and services in a range of sectors from finance through to energy and healthcare. Discussions too often assert that the trustless nature of blockchain technologies enables and actively promotes their suitability, there being no need to trust third parties or centralised control. Yet humans need to be able to trust systems, and others with whom the system enables transactions. In this paper, we highlight that understanding this need for trust is critical for the development of blockchain-based systems. Through an online study with 125 users of the most well-known blockchain-based system, the cryptocurrency Bitcoin, we uncover that human and institutional aspects of trust are pervasive. Our analysis highlights that, when designing future blockchain-based technologies, we ought not only to consider computational trust but also the wider ecosystem, how trust plays a part in users engaging or disengaging with such ecosystems, and where design choices impact upon trust. From this, we distill a set of guidelines for software engineers developing blockchain-based systems for societal applications.

Gao, Y., Sibirtseva, E., Castellano, G., Kragic, D..  2019.  Fast Adaptation with Meta-Reinforcement Learning for Trust Modelling in Human-Robot Interaction. 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). :305–312.

In socially assistive robotics, an important research area is the development of adaptation techniques and their effect on human-robot interaction. We present a meta-learning based policy gradient method for addressing the problem of adaptation in human-robot interaction, and we also investigate its role as a mechanism for trust modelling. By building an escape room scenario in mixed reality with a robot, we test our hypothesis that bi-directional trust can be influenced by different adaptation algorithms. We found that our proposed model increased the perceived trustworthiness of the robot and influenced the dynamics of gaining the human's trust. Additionally, participants reported that the robot perceived them as more trustworthy during the interactions with the meta-learning based adaptation compared to the previously studied statistical adaptation model.

Losey, D. P., Sadigh, D..  2019.  Robots that Take Advantage of Human Trust. 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). :7001–7008.

Humans often assume that robots are rational. We believe robots take optimal actions given their objective; hence, when we are uncertain about what the robot's objective is, we interpret the robot's actions as optimal with respect to our estimate of its objective. This approach makes sense when robots straightforwardly optimize their objective, and enables humans to learn what the robot is trying to achieve. However, our insight is that, when robots are aware that humans learn by trusting that the robot's actions are rational, intelligent robots do not act as the human expects; instead, they take advantage of the human's trust, and exploit this trust to more efficiently optimize their own objective. In this paper, we formally model instances of human-robot interaction (HRI) where the human does not know the robot's objective using a two-player game. We formulate different ways in which the robot can model the uncertain human, and compare solutions of this game when the robot has conservative, optimistic, rational, and trusting human models. In an offline linear-quadratic case study and a real-time user study, we show that trusting human models can naturally lead to communicative robot behavior, which influences end-users and increases their involvement.

2020-11-23
Li, W., Zhu, H., Zhou, X., Shimizu, S., Xin, M., Jin, Q..  2018.  A Novel Personalized Recommendation Algorithm Based on Trust Relevancy Degree. 2018 IEEE 16th Intl Conf on Dependable, Autonomic and Secure Computing, 16th Intl Conf on Pervasive Intelligence and Computing, 4th Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress (DASC/PiCom/DataCom/CyberSciTech). :418–422.
The rapid development of the Internet and e-commerce has brought much convenience to people's lives. Personalized recommendation technology provides users with services they may be interested in, according to information such as personal characteristics and historical behaviors. Research on personalized recommendation has been a hot topic in data mining and social networks. In this paper, we focus on resolving the problem of data sparsity based on users' rating data and social network information, introduce a set of new measures for social trust, and propose a novel personalized recommendation algorithm based on matrix factorization combined with trust relevancy. Our experiments were performed on the Dianping datasets. The results show that our algorithm outperforms traditional approaches in terms of accuracy and stability.
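As an illustration of the general approach the abstract describes, the sketch below trains a toy matrix factorization model with a social-trust regularizer that pulls each user's latent vector toward a trust-weighted combination of trusted neighbours' vectors. The paper's actual "trust relevancy degree" measure is not reproduced here; the trust weights, latent dimension and hyperparameters are all illustrative.

```python
import numpy as np

def train_trust_mf(ratings, trust, n_users, n_items, k=2, lr=0.01,
                   reg=0.02, beta=0.05, epochs=500, seed=0):
    """Toy SGD matrix factorization with a social-trust regularizer.

    ratings: list of (user, item, rating); trust: dict user -> {neighbour: weight}.
    The trust term pulls P[u] toward a trust-weighted combination of the
    neighbours' latent vectors (a common social-MF formulation).
    """
    rng = np.random.default_rng(seed)
    P = rng.normal(scale=0.1, size=(n_users, k))   # user factors
    Q = rng.normal(scale=0.1, size=(n_items, k))   # item factors
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]
            nbrs = trust.get(u, {})
            # trust-weighted combination of neighbour vectors (zero if none)
            social = sum(w * P[v] for v, w in nbrs.items()) if nbrs else 0.0
            P[u] += lr * (err * Q[i] - reg * P[u] - beta * (P[u] - social))
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

P, Q = train_trust_mf(
    ratings=[(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.5), (2, 1, 1.5)],
    trust={1: {0: 1.0}, 2: {0: 0.5}},   # users 1 and 2 trust user 0
    n_users=3, n_items=2)
print(float(P[0] @ Q[0]), float(P[0] @ Q[1]))  # predictions for user 0
```

On this toy data the model recovers user 0's preference ordering (item 0 over item 1); the social term mainly matters for sparse users such as 1 and 2, who borrow structure from the trusted neighbour.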
Gao, Y., Li, X., Li, J., Gao, Y., Guo, N..  2018.  Graph Mining-based Trust Evaluation Mechanism with Multidimensional Features for Large-scale Heterogeneous Threat Intelligence. 2018 IEEE International Conference on Big Data (Big Data). :1272–1277.
More and more organizations and individuals are starting to pay attention to real-time threat intelligence to protect themselves from complicated, organized, persistent and weaponized cyber attacks. However, most users worry about the trustworthiness of threat intelligence provided by TISPs (Threat Intelligence Sharing Platforms), and the trust evaluation mechanism has become a hot topic in applications of TISPs. Yet most current TISPs do not present any practical solution for trust evaluation of the threat intelligence itself. In this paper, we propose a graph mining-based trust evaluation mechanism with multidimensional features for large-scale heterogeneous threat intelligence. This mechanism provides a feasible scheme and achieves the task of trust evaluation for a TISP through the integration of a trust-aware intelligence architecture model, a graph mining-based intelligence feature extraction method, and an automatic and interpretable trust evaluation algorithm. We implement this trust evaluation mechanism in a practical TISP (called GTTI), and evaluate the performance of our system on a real-world dataset from three popular cyber threat intelligence sharing platforms. Experimental results show that our mechanism can achieve 92.83% precision and 93.84% recall in trust evaluation. To the best of our knowledge, this work is the first to evaluate the trust level of heterogeneous threat intelligence automatically from the perspective of graph mining with multidimensional features including source, content, time, and feedback. Our work is beneficial for providing assistance on intelligence quality for the decision-making of human analysts, building a trust-aware threat intelligence sharing platform, and enhancing the availability of heterogeneous threat intelligence to protect organizations against cyberspace attacks effectively.
Haddad, G. El, Aïmeur, E., Hage, H..  2018.  Understanding Trust, Privacy and Financial Fears in Online Payment. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :28–36.
In online payment, customers must transmit their personal and financial information through the website to conclude their purchase and pay for the services or items selected. They may face fears about online transactions raised by their perception of the risk of financial or privacy loss. They may have concerns over the payment decision, with possible negative behaviors such as shopping cart abandonment. Therefore, three major players need to be addressed in online payment: the online seller, the payment page, and the customer's own perception. However, few studies have explored these three players in an online purchasing environment. In this paper, we focus on customer concerns and examine the antecedents of trust and payment security perception, as well as their joint effect on two fundamentally important customer aspects: privacy concerns and financial fear perception. A total of 392 individuals participated in an online survey. The results highlight the importance of the seller website's components (such as ease of use, security signs, and quality information) and their impact on perceived payment security, as well as on customers' trust and financial fear perception. The objective of our study is to design a research model that explains the factors contributing to an online payment decision.
Sutton, A., Samavi, R., Doyle, T. E., Koff, D..  2018.  Digitized Trust in Human-in-the-Loop Health Research. 2018 16th Annual Conference on Privacy, Security and Trust (PST). :1–10.
In this paper, we propose an architecture that utilizes blockchain technology for enabling verifiable trust in collaborative health research environments. The architecture supports the human-in-the-loop paradigm for health research by establishing trust between participants, including human researchers and AI systems, by making all data transformations transparent and verifiable by all participants. We define the trustworthiness of the system and provide an analysis of the architecture in terms of trust requirements. We then evaluate our architecture by analyzing its resiliency to common security threats and through an experimental realization.
Gwak, B., Cho, J., Lee, D., Son, H..  2018.  TARAS: Trust-Aware Role-Based Access Control System in Public Internet-of-Things. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :74–85.
Due to the proliferation of Internet-of-Things (IoT) environments, humans working with heterogeneous, smart objects in public IoT environments are more common than ever before. This situation often requires establishing trust relationships between a user and a smart object for their secure interactions, but without the benefit of prior interactions. In this work, we are interested in how a smart object can grant an access right to a human user in the absence of any prior knowledge, where some users may be malicious and aim to breach the security goals of the IoT system. To solve this problem, we propose a trust-aware, role-based access control system, namely TARAS, which provides adaptive authorization to users based on dynamic trust estimation. In TARAS, for the initial trust establishment, we take a multidisciplinary approach by adopting the concept of I-sharing from psychology. I-sharing follows the rationale that people with similar roles and traits are more likely to respond in a similar way. This theory provides a powerful tool to quickly establish trust between a smart object and a new user with no prior interactions. In addition, TARAS can adaptively filter malicious users out by revoking their access rights based on adaptive, dynamic trust estimation. Our experimental results show that the proposed TARAS mechanism can maximize system integrity, in terms of correctly detecting malicious or benign users, while maximizing service availability to users, particularly when the system is fine-tuned to the identified optimal trust threshold.
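A minimal sketch of the trust-aware access-control idea: access to a role is granted while the user's estimated trust stays above that role's threshold, and trust is updated from interaction feedback. The thresholds, the initial value (standing in for the paper's I-sharing-based estimate) and the exponential-moving-average update are illustrative assumptions, not TARAS's actual design.

```python
class TrustAwareRBAC:
    """Illustrative trust-aware role-based access control: a role is
    granted only while the user's trust estimate meets its threshold,
    and trust is revised from interaction outcomes."""

    ROLE_THRESHOLDS = {"guest": 0.2, "user": 0.5, "admin": 0.8}  # assumed

    def __init__(self, initial_trust=0.5, alpha=0.2):
        self.trust = initial_trust   # current trust estimate in [0, 1]
        self.alpha = alpha           # learning rate for feedback updates

    def authorize(self, role):
        """Grant the role only if current trust meets its threshold."""
        return self.trust >= self.ROLE_THRESHOLDS[role]

    def feedback(self, positive):
        """Move trust toward 1 on good outcomes, toward 0 on bad ones."""
        target = 1.0 if positive else 0.0
        self.trust += self.alpha * (target - self.trust)

acl = TrustAwareRBAC(initial_trust=0.5)
print(acl.authorize("user"))      # granted at the initial estimate
for _ in range(5):
    acl.feedback(positive=False)  # repeated misbehaviour erodes trust
print(acl.authorize("user"))      # access is revoked
```

This mirrors the adaptive revocation described in the abstract: no explicit blacklist is needed, because sustained negative feedback pushes the estimate below every role threshold.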
Wang, M., Hussein, A., Rojas, R. F., Shafi, K., Abbass, H. A..  2018.  EEG-Based Neural Correlates of Trust in Human-Autonomy Interaction. 2018 IEEE Symposium Series on Computational Intelligence (SSCI). :350–357.
This paper aims at identifying the neural correlates of human trust in autonomous systems using electroencephalography (EEG) signals. Quantifying the relationship between trust and brain activity allows for real-time assessment of human trust in automation. This line of effort contributes to the design of trusted autonomous systems and, more generally, to modeling human-autonomy interaction. To study the correlates of trust, we use an investment game in which artificial agents with different levels of trustworthiness are employed. We collected EEG signals from 10 human subjects while they played the game, then computed three types of features from these signals, capturing time-dependency, complexity and power spectrum using an autoregressive model (AR), sample entropy and Fourier analysis, respectively. Results of a mixed model analysis showed significant correlation between human trust and EEG features from certain electrodes. The frontal and occipital areas are identified as the predominant brain areas correlated with trust.
Tagliaferri, M., Aldini, A..  2018.  A Trust Logic for Pre-Trust Computations. 2018 21st International Conference on Information Fusion (FUSION). :2006–2012.
Computational trust is the digital counterpart of the human notion of trust as applied in social systems. Its main purpose is to improve the reliability of interactions in online communities and of knowledge transfer in information management systems. Trust models are formal frameworks in which the notion of computational trust is described rigorously and where its dynamics are explained precisely. In this paper we consider and extend a computational trust model, Jøsang's subjective logic: we show how this model is well suited to describe the dynamics of computational trust, but lacks effective tools to compute the initial trust values to feed into the model. To overcome some of the issues with subjective logic, we introduce a logical language which can be employed to describe and reason about trust. The core ideas behind the logical language turn out to be useful in computing initial trust values to feed into subjective logic. The aim of the paper is, therefore, to provide an improvement on subjective logic.
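For readers unfamiliar with the base model being extended, the sketch below shows a subjective-logic opinion (belief, disbelief, uncertainty, base rate) and Jøsang's standard uncertainty-favouring trust-discounting operator, which models trust transitivity. The paper's pre-trust computation itself is not reproduced; the numeric values are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """A subjective-logic opinion with b + d + u = 1 and base rate a."""
    b: float   # belief
    d: float   # disbelief
    u: float   # uncertainty
    a: float = 0.5

    def expected(self):
        """Probability expectation E = b + a * u."""
        return self.b + self.a * self.u

def discount(ab: Opinion, bx: Opinion) -> Opinion:
    """Jøsang's trust-discounting operator: A's derived opinion about X,
    given A's trust in B and B's opinion about X. B's advice is weighted
    by A's belief in B; A's disbelief and uncertainty in B become
    uncertainty about X."""
    return Opinion(
        b=ab.b * bx.b,
        d=ab.b * bx.d,
        u=ab.d + ab.u + ab.b * bx.u,
        a=bx.a,
    )

a_trusts_b = Opinion(b=0.8, d=0.1, u=0.1)
b_rates_x = Opinion(b=0.6, d=0.2, u=0.2)
a_on_x = discount(a_trusts_b, b_rates_x)
print(a_on_x)            # belief diluted, uncertainty increased
print(a_on_x.expected())
```

The operator makes the paper's complaint concrete: the output is only as good as the initial opinions fed in, which is exactly the pre-trust computation the authors target.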
Alruwaythi, M., Kambampaty, K., Nygard, K..  2018.  User Behavior Trust Modeling in Cloud Security. 2018 International Conference on Computational Science and Computational Intelligence (CSCI). :1336–1339.
Evaluating user behavior in cloud computing infrastructure is important for both Cloud Users and Cloud Service Providers. The service providers must ensure the safety of users who access the cloud. User behavior can be modeled and employed to help assess trust and play a role in ensuring the authenticity and safety of the user. In this paper, we propose a User Behavior Trust Model based on Fuzzy Logic (UBTMFL). In this model, we develop user history patterns and compare them with current user behavior. The outcome of the comparison is sent to a trust computation center to calculate a user trust value. This model considers three types of trust: direct, history and comprehensive. Simulation results are included.
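A toy illustration of the fuzzy-logic step, under loudly assumed membership functions and rule outputs (the paper's actual rule base and inputs are not public here): a deviation score between current behavior and the history pattern is mapped to a trust value via triangular memberships and weighted-average (Sugeno-style) defuzzification.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_trust(deviation):
    """Map a behavior-deviation score (0 = matches the history pattern,
    1 = completely anomalous) to a trust value. Membership functions and
    the rule outputs (0.9 / 0.5 / 0.1) are illustrative assumptions."""
    low = tri(deviation, -0.5, 0.0, 0.5)   # small deviation -> high trust
    med = tri(deviation, 0.0, 0.5, 1.0)
    high = tri(deviation, 0.5, 1.0, 1.5)   # large deviation -> low trust
    num = 0.9 * low + 0.5 * med + 0.1 * high
    den = low + med + high
    return num / den if den else 0.5

print(fuzzy_trust(0.1), fuzzy_trust(0.9))
```

A user closely matching their history pattern gets a high trust value, while a strongly deviating one gets a low value, as the model intends.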
Ma, S..  2018.  Towards Effective Genetic Trust Evaluation in Open Network. 2018 IEEE 20th International Conference on High Performance Computing and Communications; IEEE 16th International Conference on Smart City; IEEE 4th International Conference on Data Science and Systems (HPCC/SmartCity/DSS). :563–569.
In open network environments, since there is no centralized authority to monitor misbehaving entities, malicious entities can easily cause degradation of service quality. Trust has become an important factor in ensuring network security, helping entities to distinguish good partners from bad ones. In this paper, trust in an open network environment is regarded as a self-organizing system; using the self-organization principle of human social trust propagation, a genetic trust evaluation method with self-optimization and family attributes is proposed. In this method, the factors of trust evaluation include time, IP, behavior feedback and intuitive trust. Data structures for an access record table and a trust record table are designed to store the relationship between ancestor nodes and descendant nodes. A genetic trust search algorithm is designed by simulating the biological evolution process. Based on the trust information of the current node's ancestors, heuristics randomly generate chromosome populations, whose structure includes time, IP address, behavior feedback and intuitive trust. A crossover and mutation strategy then drives the population's evolutionary search. When the genetic search termination condition is met, the optimal trust chromosome in the population is selected and its trust value is computed, which is the node's genetic trust evaluation result. The simulation results show that the genetic trust evaluation method is effective, and the trust evaluation process of the current node can be regarded as a search for optimal trust results from the ancestor nodes' information. As the ancestor nodes' genetic trust information increases, the trust evaluation result of the genetic search becomes more accurate, which can effectively solve the joint fraud problem.
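The evolutionary search described above can be sketched roughly as follows. The chromosome encoding (a weight vector over ancestor records), the fitness function (rewarding weight on recent records) and all parameters are illustrative stand-ins, not the paper's actual time/IP/feedback/intuitive-trust design.

```python
import random

def genetic_trust(records, pop_size=30, gens=60, mut=0.1, seed=42):
    """Illustrative genetic search over ancestor trust records.
    Each record is (recency in [0,1], observed trust in [0,1]); a
    chromosome is a weight vector over the records, fitness rewards
    weight placed on recent records, and the result is the weighted
    trust of the fittest chromosome."""
    rng = random.Random(seed)
    n = len(records)

    def fitness(w):
        total = sum(w) or 1e-9
        return sum(wi * rec for wi, (rec, _) in zip(w, records)) / total

    def trust_of(w):
        total = sum(w) or 1e-9
        return sum(wi * t for wi, (_, t) in zip(w, records)) / total

    pop = [[rng.random() for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # elitist truncation
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n) if n > 1 else 0
            child = p1[:cut] + p2[cut:]            # one-point crossover
            if rng.random() < mut:                 # single-gene mutation
                child[rng.randrange(n)] = rng.random()
            children.append(child)
        pop = survivors + children
    best = max(pop, key=fitness)
    return trust_of(best)

# three ancestor records: (recency, observed trust)
records = [(0.9, 0.8), (0.5, 0.6), (0.1, 0.2)]
estimate = genetic_trust(records)
print(estimate)
```

Because fitness concentrates weight on the most recent record, the estimate ends up near that record's trust rather than the naive average, which is the self-optimizing behaviour the abstract describes.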
2020-10-05
Kang, Anqi.  2018.  Collaborative Filtering Algorithm Based on Trust and Information Entropy. 2018 International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS). 3:262–266.

To improve the accuracy of similarity, an improved collaborative filtering algorithm based on trust and information entropy is proposed in this paper. Firstly, the direct trust between users is determined by the users' ratings to explore their potential trust relationships. A time decay function is introduced to capture how a user's interest decays over time. Secondly, direct trust and indirect trust are combined to obtain the overall trust, which is weighted with the Pearson similarity to obtain the trust similarity. Then, information entropy theory is introduced to calculate a similarity based on weighted information entropy. Finally, the trust similarity and the entropy-based similarity are weighted to obtain a similarity combining trust and information entropy, which is used to predict the rating of the target user and generate recommendations. Simulations show that the improved algorithm achieves higher recommendation accuracy and can provide a more accurate and reliable recommendation service.
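The weighting of trust with Pearson similarity can be sketched as follows. The mixing weight is an illustrative assumption, and the information-entropy term is omitted for brevity.

```python
import math

def pearson(ru, rv):
    """Pearson correlation over items co-rated by users u and v
    (rating dicts item -> score)."""
    common = set(ru) & set(rv)
    if len(common) < 2:
        return 0.0
    mu = sum(ru[i] for i in common) / len(common)
    mv = sum(rv[i] for i in common) / len(common)
    num = sum((ru[i] - mu) * (rv[i] - mv) for i in common)
    du = math.sqrt(sum((ru[i] - mu) ** 2 for i in common))
    dv = math.sqrt(sum((rv[i] - mv) ** 2 for i in common))
    return num / (du * dv) if du and dv else 0.0

def trust_similarity(ru, rv, trust_uv, lam=0.5):
    """Weighted combination of overall trust and Pearson similarity,
    in the spirit of the paper's trust similarity (lam is illustrative)."""
    return lam * trust_uv + (1 - lam) * pearson(ru, rv)

alice = {"i1": 5, "i2": 3, "i3": 4}
bob = {"i1": 4, "i2": 2, "i3": 5}
sim = trust_similarity(alice, bob, trust_uv=0.8)
print(sim)
```

Blending trust into the similarity lets two users with few co-rated items (where Pearson alone is unreliable or zero) still be matched through their trust relationship, which is how such schemes address data sparsity.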

2019-08-05
Ma, S., Zeng, S., Guo, J..  2018.  Research on Trust Degree Model of Fault Alarms Based on Neural Network. 2018 12th International Conference on Reliability, Maintainability, and Safety (ICRMS). :73–77.

False alarms and misses are two general kinds of alarm errors, and they can decrease an operator's trust in the alarm system. Specifically, there are two different forms of trust in such systems, represented by two kinds of responses to alarms in this research: one is compliance and the other is reliance. Besides false alarms and misses, the two responses are differentially affected by properties of the alarm system, situational factors, and operator factors. However, most existing studies have only qualitatively analyzed the relationship between a single variable and the two responses. In this research, all available experimental studies were identified through database searches using the keyword "compliance and reliance", without restriction on year of publication, up to December 2017. Six relevant studies and fifty-two sets of key data were obtained as the data base of this research. Furthermore, a neural network is adopted as a tool to establish the quantitative relationship between multiple factors and each of the two forms of trust. The result will be of great significance for further study of the influence of human decision making on the overall fault detection rate and the false alarm rate of the human-machine system.

2018-02-14
Kauffmann, David, Carmi, Golan.  2017.  E-collaboration of Virtual Teams: The Mediating Effect of Interpersonal Trust. Proceedings of the 2017 International Conference on E-Business and Internet. :45–49.
This study examines the relationship between task communication and relationship communication, and collaboration by exploring the mediating effect of interpersonal trust in a virtual team environment. A theoretical model was developed to examine this relationship where cognitive trust and affective trust are defined as mediation variables between communication and collaboration. The main results of this study show that firstly, there is a significant correlation with a large effect size between communication, trust, and collaboration. Secondly, interpersonal trust plays an important role as a mediator in the relationship between communication and collaboration, especially in relationship communication within virtual teams.
Wang, Frank, Joung, Yuna, Mickens, James.  2017.  Cobweb: Practical Remote Attestation Using Contextual Graphs. Proceedings of the 2Nd Workshop on System Software for Trusted Execution. :3:1–3:7.

In theory, remote attestation is a powerful primitive for building distributed systems atop untrusting peers. Unfortunately, the canonical attestation framework defined by the Trusted Computing Group is insufficient to express rich contextual relationships between client-side software components. Thus, attestors and verifiers must rely on ad-hoc mechanisms to handle real-world attestation challenges like attestors that load executables in nondeterministic orders, or verifiers that require attestors to track dynamic information flows between attestor-side components. In this paper, we survey these practical attestation challenges. We then describe a new attestation framework, named Cobweb, which handles these challenges. The key insight is that real-world attestation is a graph problem. An attestation message is a graph in which each vertex is a software component, and has one or more labels, e.g., the hash value of the component, or the raw file data, or a signature over that data. Each edge in an attestation graph is a contextual relationship, like the passage of time, or a parent/child fork() relationship, or a sender/receiver IPC relationship. Cobweb's verifier-side policies are graph predicates which analyze contextual relationships. Experiments with real, complex software stacks demonstrate that Cobweb's abstractions are generic and can support a variety of real-world policies.
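The graph formulation can be sketched as follows, with an illustrative whitelist policy expressed as a graph predicate over parent/child edges. Vertex labels here are just SHA-256 hashes and the component names are made up; Cobweb's real label set, edge types and policy language are richer.

```python
import hashlib

def make_vertex(name, data):
    """A software component vertex labelled with its content hash."""
    return {"name": name, "hash": hashlib.sha256(data).hexdigest()}

# attestation message as a labelled graph; edges carry contextual
# relationships such as a parent/child fork()
graph = {
    "vertices": {
        "kernel": make_vertex("kernel", b"kernel-image"),
        "loader": make_vertex("loader", b"loader-bin"),
        "app": make_vertex("app", b"app-bin"),
    },
    "edges": [
        ("kernel", "loader", "parent/child"),
        ("loader", "app", "parent/child"),
    ],
}

def policy_trusted_chain(graph, whitelist, root="kernel", target="app"):
    """Verifier-side policy as a graph predicate: accept iff some
    parent/child path from root to target has only whitelisted hashes."""
    verts = graph["vertices"]
    children = {}
    for src, dst, rel in graph["edges"]:
        if rel == "parent/child":
            children.setdefault(src, []).append(dst)

    def dfs(v):
        if verts[v]["hash"] not in whitelist:
            return False
        if v == target:
            return True
        return any(dfs(c) for c in children.get(v, []))

    return dfs(root)

whitelist = {make_vertex(n, d)["hash"]
             for n, d in [("kernel", b"kernel-image"),
                          ("loader", b"loader-bin"),
                          ("app", b"app-bin")]}
print(policy_trusted_chain(graph, whitelist))
```

Tampering with any component on the path changes its hash label and makes the predicate reject, which is the verification behaviour the framework's graph policies generalize.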

Jayasinghe, Upul, Lee, Hyun-Woo, Lee, Gyu Myoung.  2017.  A Computational Model to Evaluate Honesty in Social Internet of Things. Proceedings of the Symposium on Applied Computing. :1830–1835.
Trust in the Social Internet of Things has opened new horizons in collaborative networking, particularly by allowing objects to communicate with their service providers based on relationships analogous to those in the human world. However, strengthening trust is a challenging task, as it involves identifying several influential factors in each domain of social-cyber-physical systems in order to build a reliable system. In this paper, we address the issue of understanding and evaluating honesty, an important trust metric in the trustworthiness evaluation process in social networks. First, we identify and define several trust attributes that directly affect honesty. Then, a subjective computational model is derived based on the experiences of objects and opinions from friendly objects with respect to the identified attributes. Based on the outputs of this model, a final honesty level is predicted using regression analysis. Finally, the effectiveness of our model is tested using simulations.
Tokushige, Hiroyuki, Narumi, Takuji, Ono, Sayaka, Fuwamoto, Yoshitaka, Tanikawa, Tomohiro, Hirose, Michitaka.  2017.  Trust Lengthens Decision Time on Unexpected Recommendations in Human-agent Interaction. Proceedings of the 5th International Conference on Human Agent Interaction. :245–252.
As intelligent agents learn to behave increasingly autonomously and simulate a high level of intelligence, human interaction with them will be increasingly unpredictable. Would you accept an unexpected and sometimes irrational but actually correct recommendation by an agent you trust? We performed two experiments in which participants played a game. In this game, the participants chose a path by referring to a recommendation from the agent in one of two experimental conditions: the correct or the faulty condition. After interactions with the agent, the participants received an unexpected recommendation by the agent. The results showed that, while the trust measured by a questionnaire in the correct condition was higher than that in the faulty condition, there was no significant difference in the number of people who accepted the recommendation. Furthermore, the trust in the agent made decision time significantly longer when the recommendation was not rational.
Merchant, Arpit, Singh, Navjyoti.  2017.  Hybrid Trust-Aware Model for Personalized Top-N Recommendation. Proceedings of the Fourth ACM IKDD Conferences on Data Sciences. :4:1–4:5.

Due to the large quantity and diversity of content easily available to users, recommender systems (RS) have become an integral part of nearly every online system. They allow users to resolve the information overload problem by proactively generating high-quality personalized recommendations. Trust metrics help leverage the preferences of similar users and have led to improved predictive accuracy, which is why they have become an important consideration in the design of RSs. We argue that there are additional aspects of trust, as a human notion, that can be integrated with collaborative filtering techniques to suggest items that users might like. In this paper, we present an approach to the top-N recommendation task that computes prediction scores for items as a user-specific combination of global and local trust models to capture differences in preferences. Our experiments show that the proposed method improves upon the standard trust model and outperforms competing top-N recommendation approaches on real-world data by up to 19%.

Ruan, Yefeng, Zhang, Ping, Alfantoukh, Lina, Durresi, Arjan.  2017.  Measurement Theory-Based Trust Management Framework for Online Social Communities. ACM Trans. Internet Technol.. 17:16:1–16:24.
We propose a trust management framework based on measurement theory to infer indirect trust in online social communities using trust’s transitivity property. Inspired by the similarities between human trust and measurement, we propose a new trust metric, composed of impression and confidence, which captures both trust level and its certainty. Furthermore, based on error propagation theory, we propose a method to compute indirect confidence according to different trust transitivity and aggregation operators. We perform experiments on two real data sets, Epinions.com and Twitter, to validate our framework. Also, we show that inferring indirect trust can connect more pairs of users.
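One minimal reading of the impression/confidence metric under error-propagation theory can be sketched as follows, assuming impressions multiply along a trust chain (transitivity) and the standard errors of independent factors combine in quadrature. This is an illustrative interpretation, not the paper's exact operators.

```python
import math

def propagate_chain(links):
    """Indirect trust along a chain of (impression, std_error) links.

    Impressions multiply (trust transitivity); the standard error of the
    product follows first-order error propagation for independent
    factors: (s_z / z)^2 = sum_i (s_i / m_i)^2.
    """
    impression = 1.0
    rel_var = 0.0
    for m, s in links:
        impression *= m
        rel_var += (s / m) ** 2
    std_error = impression * math.sqrt(rel_var)
    return impression, std_error

# A trusts B (impression 0.9, std error 0.05); B trusts C (0.8, 0.1)
m, s = propagate_chain([(0.9, 0.05), (0.8, 0.1)])
print(m, s)
```

As expected from the measurement analogy, the indirect impression is lower than either direct impression, and the relative uncertainty grows with every hop, so certainty degrades along longer trust chains.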
Kulyk, O., Reinheimer, B. M., Gerber, P., Volk, F., Volkamer, M., Mühlhäuser, M..  2017.  Advancing Trust Visualisations for Wider Applicability and User Acceptance. 2017 IEEE Trustcom/BigDataSE/ICESS. :562–569.
There are only a few visualisations targeting the communication of trust statements. Even though there are some advanced and scientifically founded visualisations (for example, the opinion triangle, the human trust interface, and T-Viz), the stars interface known from e-commerce platforms is by far the most common one. In this paper, we propose two trust visualisations based on T-Viz, which was recently proposed and successfully evaluated in large user studies. Despite being the most promising proposal, its design is not primarily based on findings from human-computer interaction or cognitive psychology. Our visualisations aim to integrate such findings and to potentially improve decision making in terms of correctness and efficiency. A large user study reveals that our proposed visualisations outperform T-Viz in these factors.
Gutzwiller, R. S., Reeder, J..  2017.  Human interactive machine learning for trust in teams of autonomous robots. 2017 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA). :1–3.

Unmanned systems are increasing in number, while their manning requirements remain the same. To decrease manpower demands, machine learning techniques and autonomy are gaining traction and visibility. One barrier is human perception and understanding of autonomy. Machine learning techniques can result in “black box” algorithms that may yield high fitness, but poor comprehension by operators. However, Interactive Machine Learning (IML), a method to incorporate human input over the course of algorithm development by using neuro-evolutionary machine-learning techniques, may offer a solution. IML is evaluated here for its impact on developing autonomous team behaviors in an area search task. Initial findings show that IML-generated search plans were chosen over plans generated using a non-interactive ML technique, even though the participants trusted them slightly less. Further, participants discriminated each of the two types of plans from each other with a high degree of accuracy, suggesting the IML approach imparts behavioral characteristics into algorithms, making them more recognizable. Together the results lay the foundation for exploring how to team humans successfully with ML behavior.