Biblio

Filters: Keyword is human trust
2022-06-09
Manoj Vignesh, K M, Sujanani, Anish, Bangalore, Raghu A..  2021.  Modelling Trust Frameworks for Network-IDS. 2021 2nd International Conference for Emerging Technology (INCET). :1–5.
Though intrusion detection systems provide actionable alerts based on signature-based or anomaly-based traffic patterns, the majority of systems still rely on human analysts to identify and contain the root cause of security incidents. This process is naturally susceptible to human error and is time-consuming, which may allow for further enumeration and pivoting within a compromised environment. In this paper, we augment traditional signature-based network intrusion detection systems with a trust framework whose reduction and redemption values are functions of incident severity, the degree of connectivity of nodes, and the time elapsed. A lightweight implementation on the nodes, coupled with a multithreaded approach on the central trust server, has shown the capability to scale to larger networks with high traffic volumes and a varying proportion of suspicious traffic patterns.
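The abstract gives the shape of the trust update (reduction and redemption driven by severity, connectivity, and elapsed time) but not its exact form. As a rough illustration only, here is a minimal Python sketch of a per-node update in that spirit; the weights and the exponential redemption term are assumptions, not taken from the paper:

```python
import math

def update_trust(trust, severity, degree, dt, w_sev=0.5, w_deg=0.3, redeem_rate=0.01):
    """Hypothetical node-trust update: reduction scales with incident severity
    and node connectivity; redemption grows with time elapsed since the incident."""
    reduction = w_sev * severity + w_deg * degree   # penalty for a flagged incident
    redemption = 1.0 - math.exp(-redeem_rate * dt)  # gradual recovery over time
    return max(0.0, min(1.0, trust - reduction + redemption))

# Example: a node at trust 0.9, flagged with severity 0.4 and normalized degree 0.2,
# re-evaluated 60 seconds later.
print(update_trust(0.9, severity=0.4, degree=0.2, dt=60))
```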
Adamik, Mark, Dudzinska, Karolina, Herskind, Adrian J., Rehm, Matthias.  2021.  The Difference Between Trust Measurement and Behavior: Investigating the Effect of Personalizing a Robot's Appearance on Trust in HRI. 2021 30th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :880–885.
With the increased use of social robots in critical applications, like elder care and rehabilitation, it becomes necessary to investigate the user's trust in robots to prevent over- and under-utilization of the robotic systems. While several studies have shown how trust increases through personalised behaviour, there is a lack of research concerned with the influence of personalised physical appearance. This study explores the effect of personalised physical appearance on trust in human-robot interaction (HRI). In an online game, 60 participants interacted with a robot, where half of the participants were asked to personalise the robot prior to the game. Trust was measured through a trust-related questionnaire as well as by evaluating user behaviour during the game. Results indicate that personalised physical appearance does not directly correlate with higher trust perceptions; however, there was significant evidence that players exhibit more trusting behaviours in a game against a personalised robot.
Yin, Weiru, Chai, Chen, Zhou, Ziyao, Li, Chenhao, Lu, Yali, Shi, Xiupeng.  2021.  Effects of trust in human-automation shared control: A human-in-the-loop driving simulation study. 2021 IEEE International Intelligent Transportation Systems Conference (ITSC). :1147–1154.
Human-automation shared control is proposed to reduce the risk of driver disengagement in Level-3 autonomous vehicles. Although previous studies have shown that shared control strategies are effective at keeping a driver in the loop and improving the driver's performance, over- and under-trust may affect the cooperation between the driver and the automation system. This study conducted a human-in-the-loop driving simulation experiment to assess the effects of trust on drivers' behavior under shared control. An expert shared control strategy with longitudinal and lateral driving assistance was proposed and implemented in the experiment platform. Based on the experiment (N=24), trust in shared control was evaluated, followed by a correlation analysis of trust and behaviors. Moderating effects of trust on the relationship between gaze focalization and minimum time to collision were then explored. Results showed that self-reported trust in shared control can be evaluated by three subscales, safety, efficiency, and ease of control, all of which show stronger correlations with gaze focalization than with other behaviors. Besides, with more trust in ease of control, human-machine conflict in mean brake inputs decreases gently. The moderating effects show that trust amplifies the decrease in minimum time to collision as eyes-off-road time increases. These results indicate that over-trust in automation leads to unsafe behaviors, particularly in monitoring behavior. This study contributes to revealing the link between trust and behavior in the context of human-automation shared control. It can be applied to improve the design of shared control and reduce risky driver behaviors through further trust calibration.
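Moderation analyses of this kind are commonly run as a regression with an interaction term. The sketch below uses synthetic data; the variable names and effect sizes are invented for illustration, not the authors':

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 24  # same sample size as the study
df = pd.DataFrame({
    "trust": rng.uniform(1, 7, n),      # self-reported trust in shared control
    "eyes_off": rng.uniform(0, 10, n),  # eyes-off-road time (s)
})
# Synthetic minimum time to collision, with trust moderating the eyes-off effect
df["min_ttc"] = (5 - 0.2 * df["eyes_off"]
                 - 0.05 * df["trust"] * df["eyes_off"]
                 + rng.normal(0, 0.5, n))

# Moderation shows up as a significant trust:eyes_off interaction term
model = smf.ols("min_ttc ~ trust * eyes_off", data=df).fit()
print(model.summary())
```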
Luo, Ruijiao, Huang, Chao, Peng, Yuntao, Song, Boyi, Liu, Rui.  2021.  Repairing Human Trust by Promptly Correcting Robot Mistakes with An Attention Transfer Model. 2021 IEEE 17th International Conference on Automation Science and Engineering (CASE). :1928–1933.

In human-robot collaboration (HRC), human trust in the robot is the human expectation that a robot executes tasks with desired performance. A higher level of trust increases the willingness of a human operator to assign tasks, share plans, and reduce interruptions during robot execution, thereby facilitating human-robot integration both physically and mentally. However, due to real-world disturbances, robots inevitably make mistakes, decreasing human trust and further influencing collaboration. Trust is fragile, and trust loss is easily triggered when robots show an inability to execute tasks, making trust maintenance challenging. To maintain human trust, in this research, a trust repair framework is developed based on a human-to-robot attention transfer (H2R-AT) model and a user trust study. The rationale of this framework is that promptly correcting a mistake restores human trust. With H2R-AT, a robot localizes human verbal concerns and makes prompt mistake corrections to avoid task failures at an early stage and ultimately improve human trust. The user trust study measures trust before and after the behavior corrections to quantify the trust loss. Robot experiments covering four typical mistakes (wrong action, wrong region, wrong pose, and wrong spatial relation) validated the accuracy of H2R-AT in correcting robot behavior; a user trust study with 252 participants was conducted, and the changes in trust levels before and after corrections were evaluated. The effectiveness of the trust repair was evaluated by the mistake-correction accuracy and the trust improvement.

Dekarske, Jason, Joshi, Sanjay S..  2021.  Human Trust of Autonomous Agent Varies With Strategy and Capability in Collaborative Grid Search Task. 2021 IEEE 2nd International Conference on Human-Machine Systems (ICHMS). :1–6.
Trust is an important emerging area of study in human-robot cooperation. Many studies have begun to look at the issue of robot (agent) capability as a predictor of human trust in the robot. However, the assumption that agent capability is the sole predictor of human trust could underestimate the complexity of the problem. This study aims to investigate the effects of agent strategy and agent capability in a visual search task. Fourteen subjects were recruited to partake in a web-based grid search task. They were each paired with a series of autonomous agents to search an on-screen grid to find a number of outlier objects as quickly as possible. Both the human and agent searched the grid concurrently, and the human was able to see the movement of the agent. In each trial, a different autonomous agent with its assigned capability used one of three search strategies to assist its human counterpart. After each trial, the autonomous agent reported the number of outliers it found, and the human subject was asked to determine the total number of outliers in the area. Some autonomous agents reported only a fraction of the outliers they encountered, thus encoding a varying level of agent capability. Human subjects then evaluated statements related to the behavior, reliability, and trust of the agent. The results showed increased measures of trust and reliability with increasing capability. Additionally, the most legible search strategies received the highest average ratings in a measure of familiarity. Remarkably, given no prior information about the capabilities or strategies they would see, subjects were able to determine consistent trustworthiness of the agent. Furthermore, both the capability and strategy of the agent had statistically significant effects on the human's trust in the agent.
Summerer, Christoph, Regnath, Emanuel, Ehm, Hans, Steinhorst, Sebastian.  2021.  Human-based Consensus for Trust Installation in Ontologies. 2021 IEEE International Conference on Blockchain and Cryptocurrency (ICBC). :1–3.
In this paper, we propose a novel protocol to represent the human factor in a blockchain environment. Our approach allows individuals or groups of humans to propose data in blocks that cannot be validated automatically but need human knowledge and collaboration to be validated. Only if human-based consensus on the correctness and trustworthiness of the data is reached is the new block appended to the blockchain. This human approach significantly extends the possibilities of blockchain applications to data types beyond financial transaction data.
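The abstract leaves the consensus rule itself open. A toy sketch of the append decision, assuming a simple majority vote among human validators:

```python
import hashlib, json

def human_consensus_reached(votes, threshold=0.5):
    """votes maps validator id -> True/False judgement on the proposed data."""
    return sum(votes.values()) / len(votes) > threshold

def try_append(chain, block_data, votes):
    if not human_consensus_reached(votes):
        return False  # rejected: no human consensus on correctness/trustworthiness
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": block_data, "prev": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)
    return True

chain = []
votes = {"alice": True, "bob": True, "carol": False}
print(try_append(chain, {"ontology_edit": "addSubclass(X, Y)"}, votes))  # True: 2 of 3 approve
```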
Dizaji, Lida Ghaemi, Hu, Yaoping.  2021.  Building And Measuring Trust In Human-Machine Systems. 2021 IEEE International Conference on Autonomous Systems (ICAS). :1–5.
In human-machine systems (HMS), the trust placed by humans in machines is a complex concept that attracts increasing research effort. Herein, we reviewed recent studies on building and measuring trust in HMS. The review was based on one comprehensive model of trust, IMPACTS, which has seven features: intention, measurability, performance, adaptivity, communication, transparency, and security. The review found that, in the past five years, HMS have fulfilled the features of intention, measurability, communication, and transparency, and most HMS consider the feature of performance. However, HMS rarely address the feature of adaptivity and neglect the feature of security due to the use of stand-alone simulations. These findings indicate that future work considering the features of adaptivity and/or security is imperative to foster human trust in HMS.
Cohen, Myke C., Demir, Mustafa, Chiou, Erin K., Cooke, Nancy J..  2021.  The Dynamics of Trust and Verbal Anthropomorphism in Human-Autonomy Teaming. 2021 IEEE 2nd International Conference on Human-Machine Systems (ICHMS). :1–6.
Trust in autonomous teammates has been shown to be a key factor in human-autonomy team (HAT) performance, and anthropomorphism is a closely related construct that is underexplored in HAT literature. This study investigates whether perceived anthropomorphism can be measured from team communication behaviors in a simulated remotely piloted aircraft system task environment, in which two humans in unique roles were asked to team with a synthetic (i.e., autonomous) pilot agent. We compared verbal and self-reported measures of anthropomorphism with team error handling performance and trust in the synthetic pilot. Results for this study show that trends in verbal anthropomorphism follow the same patterns expected from self-reported measures of anthropomorphism, with respect to fluctuations in trust resulting from autonomy failures.
Pang, Yijiang, Huang, Chao, Liu, Rui.  2021.  Synthesized Trust Learning from Limited Human Feedback for Human-Load-Reduced Multi-Robot Deployments. 2021 30th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :778–783.
Human multi-robot system (MRS) collaboration has demonstrated potential in wide application scenarios due to the integration of human cognitive skills and the powerful capability a robot team gains from its multi-member structure. However, due to limited cognitive capability, a human cannot simultaneously monitor multiple robots and identify the abnormal ones, largely limiting the efficiency of human-MRS collaboration. There is an urgent need to proactively reduce unnecessary human engagements and thereby human cognitive load. Human trust in human-MRS collaboration reveals human expectations of robot performance. Based on trust estimation, the work between a human and an MRS can be reallocated so that the MRS self-monitors and requests human guidance only in critical situations. Inspired by this, a novel Synthesized Trust Learning (STL) method was developed to model human trust in the collaboration. STL explores two aspects of human trust (trust level and trust preference) while accelerating convergence by integrating active learning to reduce human workload. To validate the effectiveness of the method, a "searching for victims in the context of city rescue" task was designed in an open-world simulation environment, and a user study with 10 volunteers was conducted to generate real human trust feedback. The results showed that by maximally utilizing human feedback, STL achieved higher accuracy in trust modeling from only a small amount of feedback, effectively reducing the human interventions needed to model trust accurately and therefore reducing human cognitive load in the collaboration.
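The STL model itself is not specified in the abstract; the sketch below illustrates only the active-learning ingredient it mentions, uncertainty sampling to keep the number of human trust labels small, on synthetic data with an assumed logistic trust model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_pool = rng.normal(size=(200, 4))  # robot-state features (synthetic)
true_w = np.array([1.5, -2.0, 0.5, 0.0])
y_pool = (X_pool @ true_w + rng.normal(0, 0.5, 200) > 0).astype(int)  # hidden trust labels

# Seed with one labeled example per class, then query the most ambiguous states
labeled = [int(np.flatnonzero(y_pool == 0)[0]), int(np.flatnonzero(y_pool == 1)[0])]
model = LogisticRegression()
for _ in range(10):  # each round "asks the human" for one more label
    model.fit(X_pool[labeled], y_pool[labeled])
    uncertainty = np.abs(model.predict_proba(X_pool)[:, 1] - 0.5)
    for idx in np.argsort(uncertainty):  # most uncertain unlabeled state first
        if int(idx) not in labeled:
            labeled.append(int(idx))
            break
print("accuracy with", len(labeled), "labels:", model.score(X_pool, y_pool))
```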
Hou, Ming.  2021.  Enabling Trust in Autonomous Human-Machine Teaming. 2021 IEEE International Conference on Autonomous Systems (ICAS). :1–1.
The advancement of AI enables the evolution of machines from relatively simple automation to completely autonomous systems that augment human capabilities with improved quality and productivity in work and life. The singularity is near! However, humans are still vulnerable. The COVID-19 pandemic reminds us of our limited knowledge about nature. The recent Boeing 737 Max accidents sound the alarm again about the potential risks of human-autonomy symbiosis technologies. A key challenge of safe and effective human-autonomy teaming is enabling "trust" within the human-machine team. It is even more challenging when we are facing insufficient data, incomplete information, indeterministic conditions, and inexhaustive solutions for uncertain actions. This creates an imperative need for appropriate design guidance and scientific methodologies for developing safety-critical autonomous systems and AI functions. The question is how to build and maintain a safe, effective, and trusted partnership between humans and autonomous systems. This talk discusses a context-based and interaction-centred design (ICD) approach for developing a safe and collaborative partnership between humans and technology by optimizing the interaction between human intelligence and AI. An associated trust model, IMPACTS (Intention, Measurability, Performance, Adaptivity, Communications, Transparency, and Security), will also be introduced to enable practitioners to foster an assured and calibrated trust relationship between humans and their partner autonomous systems. A real-world example of human-autonomy teaming in a military context will be presented to illustrate the utility and effectiveness of these trust enablers.
2022-02-03
Pang, Yijiang, Liu, Rui.  2021.  Trust-Aware Emergency Response for A Resilient Human-Swarm Cooperative System. 2021 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR). :15–20.

A human-swarm cooperative system, which mixes multiple robots and a human supervisor to form a mission team, has been widely used in emergent scenarios such as criminal tracking and victim assistance. These scenarios concern human safety and require a robot team to transition quickly from its current task to a new emergent task. This sudden mission change brings difficulty in robot motion adjustment and increases the risk of performance degradation of the swarm. Trust in human-human collaboration reflects a general expectation of the collaboration; based on trust, humans mutually adjust their behaviors for better teamwork. Inspired by this, in this research, a trust-aware reflective control method (Trust-R) was developed for a robot swarm to understand the collaborative mission and calibrate its motions accordingly for better emergency response. Typical emergent tasks in public safety ("transit between area inspection tasks" and "respond to an emergent target: a car accident"), with eight fault-related situations, were designed to simulate robot deployments. A human user study with 50 volunteers was conducted to model trust and assess swarm performance. Trust-R's effectiveness in supporting a robot team for emergency response was validated by improved task performance and increased trust scores.
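How trust feeds back into swarm motion is not detailed in the abstract. One plausible reading, with an entirely assumed mapping, is a trust-dependent gain on the swarm's self-corrective behavior:

```python
def reflective_gain(trust, t_low=0.3, t_high=0.8):
    """Map an estimated trust score in [0, 1] to a self-correction gain
    (hypothetical thresholds): low trust triggers stronger motion calibration."""
    trust = max(0.0, min(1.0, trust))
    if trust >= t_high:
        return 0.0  # human trusts the swarm: keep the current behavior
    if trust <= t_low:
        return 1.0  # trust has collapsed: apply full corrective adjustment
    return (t_high - trust) / (t_high - t_low)  # linear blend in between

def blended_command(nominal, corrective, trust):
    """Blend the nominal motion command with the corrective one by the gain."""
    g = reflective_gain(trust)
    return [(1 - g) * n + g * c for n, c in zip(nominal, corrective)]

print(blended_command([1.0, 0.0], [0.2, 0.5], trust=0.55))
```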

2021-02-01
Calhoun, C. S., Reinhart, J., Alarcon, G. A., Capiola, A..  2020.  Establishing Trust in Binary Analysis in Software Development and Applications. 2020 IEEE International Conference on Human-Machine Systems (ICHMS). :1–4.
This exploratory study examined software programmers' trust in binary analysis techniques used to evaluate and understand binary code components. Experienced software developers participated in knowledge elicitations to identify factors affecting trust in the tools and methods used for understanding binary code behavior and minimizing potential security vulnerabilities. Developer perceptions of trust in those tools to assess implementation risk in binary components were captured across a variety of application contexts. The software developers reported that source security and vulnerability reports provided the best insight into and awareness of potential issues or shortcomings in binary code. Further, applications where the potential impact on systems and the risk of data loss are high require relying on more than one type of analysis to ensure the binary component is sound. The findings suggest binary analysis is viable for identifying issues and potential vulnerabilities as part of a comprehensive solution for understanding binary code behavior and security vulnerabilities, but relying solely on binary analysis tools and binary release metadata appears insufficient to ensure a secure solution.
Ng, M., Coopamootoo, K. P. L., Toreini, E., Aitken, M., Elliot, K., Moorsel, A. van.  2020.  Simulating the Effects of Social Presence on Trust, Privacy Concerns & Usage Intentions in Automated Bots for Finance. 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW). :190–199.
FinBots are chatbots built on automated decision technology, aimed at facilitating accessible banking and supporting customers in making financial decisions. Chatbots are increasing in prevalence, sometimes even equipped to mimic human social rules, expectations, and norms, decreasing the necessity for human-to-human interaction. As banks and financial advisory platforms move towards creating bots that enhance the current state of consumer trust and adoption rates, we investigated the effects of chatbot vignettes with and without socio-emotional features on intention to use the chatbot for financial support purposes. We conducted a between-subjects online experiment with N = 410 participants. Participants in the control group were provided with a vignette describing a secure and reliable chatbot called XRO23, whereas participants in the experimental group were presented with a vignette describing a secure and reliable chatbot that is more human-like and named Emma. We found that the Emma vignette did not increase participants' trust levels or lower their privacy concerns, even though it increased the perception of social presence. However, we found that intention to use the presented chatbot for financial support was positively influenced by perceived humanness and trust in the bot. Participants were also more willing to share financially sensitive information such as account number, sort code, and payment information with XRO23 than with Emma, revealing a preference for a technical and mechanical FinBot in information sharing. Overall, this research contributes to our understanding of the intention to use chatbots with different features as financial technology, in particular that socio-emotional support may not be favoured when designed independently of financial function.
Kfoury, E. F., Khoury, D., AlSabeh, A., Gomez, J., Crichigno, J., Bou-Harb, E..  2020.  A Blockchain-based Method for Decentralizing the ACME Protocol to Enhance Trust in PKI. 2020 43rd International Conference on Telecommunications and Signal Processing (TSP). :461–465.

Blockchain technology is a cornerstone of digital trust and systems' decentralization. The drive to minimize trust assumptions in computing systems has led researchers to investigate the applicability of blockchain to decentralize conventional security models. Specifically, researchers continuously aim at minimizing trust in the well-known Public Key Infrastructure (PKI) model, which currently requires a trusted Certificate Authority (CA) to sign digital certificates. Recently, the Automated Certificate Management Environment (ACME) was standardized as a certificate issuance automation protocol. It minimizes human interaction by enabling certificates to be automatically requested, verified, and installed on servers. ACME solved only the automation issue; the trust concerns remain, as a trusted CA is still required. In this paper we propose decentralizing the ACME protocol by using blockchain technology to address the trust issues of the existing PKI model and to eliminate the need for a trusted CA. The system was implemented and tested on the Ethereum blockchain, and the results showed that the system is feasible in terms of cost, speed, and applicability on a wide range of devices, including Internet of Things (IoT) devices.
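The paper's contract design is not given in the abstract. A minimal client-side sketch of the idea, assuming a hypothetical on-chain certificate registry; the ABI, function name, and address below are illustrative only:

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # local Ethereum node (assumed)

# Hypothetical registry mapping a domain to its certificate fingerprint,
# standing in for the trusted CA's signature.
REGISTRY_ABI = [{
    "name": "fingerprintOf", "type": "function", "stateMutability": "view",
    "inputs": [{"name": "domain", "type": "string"}],
    "outputs": [{"name": "", "type": "bytes32"}],
}]
registry = w3.eth.contract(address="0x" + "00" * 20, abi=REGISTRY_ABI)

def verify_certificate(domain: str, presented_fingerprint: bytes) -> bool:
    """Client-side check: accept the certificate iff its fingerprint matches
    the one recorded on-chain for that domain (no CA needed)."""
    on_chain = registry.functions.fingerprintOf(domain).call()
    return on_chain == presented_fingerprint
```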

Han, W., Schulz, H.-J..  2020.  Beyond Trust Building — Calibrating Trust in Visual Analytics. 2020 IEEE Workshop on TRust and EXpertise in Visual Analytics (TREX). :9–15.
Trust is a fundamental factor in how users engage in interactions with Visual Analytics (VA) systems. While the importance of building trust to this end has been pointed out in research, the aspect that trust can also be misplaced is largely ignored in VA so far. This position paper addresses this aspect by putting trust calibration in focus – i.e., the process of aligning the user’s trust with the actual trustworthiness of the VA system. To this end, we present the trust continuum in the context of VA, dissect important trust issues in both VA systems and users, as well as discuss possible approaches that can build and calibrate trust.
Rutard, F., Sigaud, O., Chetouani, M..  2020.  TIRL: Enriching Actor-Critic RL with non-expert human teachers and a Trust Model. 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :604–611.
Reinforcement learning (RL) algorithms have been demonstrated to be very attractive tools for training agents to achieve sequential tasks. However, these algorithms require too much training data to converge to be efficiently applied to physical robots. By using a human teacher, the learning process can be made faster and more robust, but the overall performance heavily depends on the quality and availability of teacher demonstrations or instructions. In particular, when these teaching signals are inadequate, the agent may fail to learn an optimal policy. In this paper, we introduce a trust-based interactive task learning approach. We propose an RL architecture able to learn both from environment rewards and from various sparse teaching signals provided by non-expert teachers, using an actor-critic agent, a human model, and a trust model. We evaluate the performance of this architecture on four different setups using a maze environment with different simulated teachers and show the benefits of the trust model.
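The abstract names the components but not their equations. One simple way such a trust model could gate sparse, possibly unreliable teacher feedback, with both update rules assumed for illustration:

```python
def shaped_reward(env_reward, teacher_feedback, trust):
    """Blend the environment reward with teacher feedback, weighted by the
    current trust estimate in that teacher (hypothetical form)."""
    if teacher_feedback is None:  # teaching signals are sparse
        return env_reward
    return env_reward + trust * teacher_feedback

def update_trust(trust, teacher_feedback, critic_td_error, lr=0.05):
    """Raise trust when the feedback agrees in sign with the critic's TD error,
    lower it otherwise: a simple stand-in for the paper's trust model."""
    agreement = 1.0 if teacher_feedback * critic_td_error > 0 else -1.0
    return min(1.0, max(0.0, trust + lr * agreement))

trust = 0.5
trust = update_trust(trust, teacher_feedback=+1.0, critic_td_error=+0.3)
print(shaped_reward(env_reward=0.0, teacher_feedback=+1.0, trust=trust))
```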
Ajenaghughrure, I. B., Sousa, S. C. da Costa, Lamas, D..  2020.  Risk and Trust in artificial intelligence technologies: A case study of Autonomous Vehicles. 2020 13th International Conference on Human System Interaction (HSI). :118–123.
This study investigates how risk influences users' trust before and after interactions with technologies such as autonomous vehicles (AVs), as well as the psychophysiological correlates of users' trust in their electrodermal activity responses. Eighteen (18) carefully selected participants embarked on a hypothetical trip by playing an autonomous vehicle driving game. In order to stay safe throughout the driving experience under four risk conditions (very high risk, high risk, low risk, and no risk), based on automotive safety integrity levels (ASIL D, C, B, A), participants exhibited either high or low trust by evaluating the AV to be more or less trustworthy and consequently relying on either the artificial intelligence or the joystick to control the vehicle. The results of the experiment show a significant increase in users' trust and in their delegation of control to the AV as risk decreases, and vice versa. In addition, there was a significant difference in users' initial trust before and after interacting with the AV under varying risk conditions. Finally, there was a significant correlation in users' psychophysiological responses (electrodermal activity) when exhibiting higher and lower trust levels towards the AV. The implications of these results and future research opportunities are discussed.
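A correlation between a binary trust level and a continuous electrodermal measure is typically tested point-biserially; a sketch on synthetic data (all numbers invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
trust_high = rng.binomial(1, 0.5, 18)  # 1 = high trust, 0 = low trust (18 participants)
eda = 0.4 - 0.15 * trust_high + rng.normal(0, 0.1, 18)  # synthetic EDA amplitude

r, p = stats.pointbiserialr(trust_high, eda)
print(f"point-biserial r = {r:.2f}, p = {p:.3f}")
```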
Papadopoulos, A. V., Esterle, L..  2020.  Situational Trust in Self-aware Collaborating Systems. 2020 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion (ACSOS-C). :91–94.
Trust among humans affects the way we interact with each other. In autonomous systems, this trust is often predefined and hard-coded before the systems are deployed. However, when systems encounter unfolding situations that require them to interact with others, a notion of trust becomes inevitable. In this paper, we discuss trust as a fundamental measure enabling an autonomous system to decide whether or not to interact with another system, whether biological or artificial. These decisions become increasingly important as systems continuously integrate with others during runtime.
Hou, M..  2020.  IMPACT: A Trust Model for Human-Agent Teaming. 2020 IEEE International Conference on Human-Machine Systems (ICHMS). :1–4.
A trust model, IMPACT (Intention, Measurability, Predictability, Agility, Communication, and Transparency), has been conceptualized to build human trust in autonomous agents. These six critical characteristics must be exhibited by the agents in order to gain and maintain trust from their human partners towards an effective and collaborative team in achieving common goals. The IMPACT model guided the design of an intelligent adaptive decision aid for dynamic target engagement processes in a military context. Positive feedback from subject matter experts who participated in a large-scale joint exercise controlling multiple unmanned vehicles indicated the effectiveness of the decision aid. It also demonstrated the utility of the IMPACT model as a set of design principles for building trusted human-agent teaming.
Wickramasinghe, C. S., Marino, D. L., Grandio, J., Manic, M..  2020.  Trustworthy AI Development Guidelines for Human System Interaction. 2020 13th International Conference on Human System Interaction (HSI). :130–136.
Artificial Intelligence (AI) is influencing almost all areas of human life. Even though these AI-based systems frequently provide state-of-the-art performance, humans still hesitate to develop, deploy, and use AI systems. The main reason for this is the lack of trust in AI systems caused by the deficiency of transparency in existing AI systems. As a solution, the "Trustworthy AI" research area emerged with the goal of defining guidelines and frameworks for improving user trust in AI systems, allowing humans to use them without fear. While trust in AI is an active area of research, very little work exists whose focus is building human trust to improve the interactions between humans and AI systems. In this paper, we provide a concise survey of concepts of trustworthy AI. Further, we present trustworthy AI development guidelines for improving user trust and thereby enhancing the interactions between AI systems and humans that occur during the AI system life cycle.
Gupta, K., Hajika, R., Pai, Y. S., Duenser, A., Lochner, M., Billinghurst, M..  2020.  Measuring Human Trust in a Virtual Assistant using Physiological Sensing in Virtual Reality. 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). :756–765.
With the advancement of Artificial Intelligence technology to make smart devices, understanding how humans develop trust in virtual agents is emerging as a critical research field. Through our research, we report on a novel methodology to investigate users' trust in auditory assistance in a Virtual Reality (VR) based search task, under both high and low cognitive load and under varying levels of agent accuracy. We collected physiological sensor data such as electroencephalography (EEG), galvanic skin response (GSR), and heart-rate variability (HRV); subjective data through questionnaires such as the System Trust Scale (STS), Subjective Mental Effort Questionnaire (SMEQ), and NASA-TLX; and a behavioral measure of trust (congruency of users' head motion in response to valid/invalid verbal advice from the agent). Our results indicate that our custom VR environment enables researchers to measure and understand human trust in virtual agents using these measures, and that both cognitive load and agent accuracy play an important role in trust formation. We discuss the implications of the research and directions for future work.
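As one concrete example of the physiological measures listed, HRV is often summarized by RMSSD over successive RR intervals; a small sketch (the RR series is synthetic, and the paper may use different HRV features):

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive RR-interval differences: a standard
    short-term HRV feature usable alongside EEG and GSR signals."""
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

print(rmssd([812, 798, 830, 805, 790, 815]))  # synthetic RR series in milliseconds
```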
Lee, J., Abe, G., Sato, K., Itoh, M..  2020.  Impacts of System Transparency and System Failure on Driver Trust During Partially Automated Driving. 2020 IEEE International Conference on Human-Machine Systems (ICHMS). :1–3.
The objective of this study is to explore changes in trust caused by a situation in which drivers need to intervene. Trust in automation is a key determinant of appropriate interaction between drivers and the system. System transparency and the type of system failure influence how trust forms in a supervisory control setting. Subjective ratings of trust were collected to examine the impact of two factors, system transparency (Detailed vs. Less) and system failure (by Limits vs. Malfunction), in a driving simulator study in which drivers experienced a partially automated vehicle. We examined trust ratings at three points: before and after driver intervention in the automated vehicle, and after subsequent experience of flawless automated driving. Our results found that system transparency did not have a significant impact on trust change from before to after the intervention. System malfunction reduced trust relative to before the intervention, whilst system limits did not influence trust. The subsequent experience restored the decreased trust; in addition, when the system limit occurred for drivers who had detailed information about the system, trust increased in spite of the intervention. The present finding has implications for automation design to achieve the appropriate level of trust.
2020-12-01
Harris, L., Grzes, M..  2019.  Comparing Explanations between Random Forests and Artificial Neural Networks. 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC). :2978–2985.

The decisions made by machines are increasingly comparable in predictive performance to those made by humans, but these decision-making processes are often concealed as black boxes. Additional techniques are required to extract understanding, and one such category is explanation methods. This research compares the explanations of two popular forms of artificial intelligence: neural networks and random forests. Researchers in either field often have divided opinions on transparency, and comparing explanations may discover similar ground truths between models. Similarity can help to encourage trust in predictive accuracy alongside transparent structure and unite the respective research fields. This research explores a variety of simulated and real-world datasets that ensure fair applicability to both learning algorithms. A new heuristic explanation method that extends an existing technique is introduced, and our results show that it is somewhat similar to the other methods examined whilst also offering an alternative perspective on least-important features.
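The paper's heuristic explanation method is not detailed in the abstract. A generic way to compare explanations across a random forest and a neural network is model-agnostic permutation importance, sketched here on synthetic data:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
rf = RandomForestClassifier(random_state=0).fit(X, y)
nn = MLPClassifier(max_iter=1000, random_state=0).fit(X, y)

# Model-agnostic explanations computed identically for both models
rf_imp = permutation_importance(rf, X, y, random_state=0).importances_mean
nn_imp = permutation_importance(nn, X, y, random_state=0).importances_mean

# Agreement between the two explanation vectors as a rank correlation
rho, _ = spearmanr(rf_imp, nn_imp)
print("explanation agreement (Spearman rho):", round(float(rho), 2))
```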

Wang, S., Mei, Y., Park, J., Zhang, M..  2019.  A Two-Stage Genetic Programming Hyper-Heuristic for Uncertain Capacitated Arc Routing Problem. 2019 IEEE Symposium Series on Computational Intelligence (SSCI). :1606–1613.

Genetic Programming Hyper-heuristic (GPHH) has been successfully applied to automatically evolve effective routing policies to solve the complex Uncertain Capacitated Arc Routing Problem (UCARP). However, GPHH typically ignores the interpretability of the evolved routing policies. As a result, GP-evolved routing policies are often very complex and hard for human users to understand and trust. In this paper, we aim to improve the interpretability of GP-evolved routing policies. To this end, we propose a new Multi-Objective GP (MOGP) to optimise performance and size simultaneously. A major issue here is that size is much easier to optimise than performance, so the search tends to be biased toward small but poor routing policies. To address this issue, we propose a simple yet effective Two-Stage GPHH (TS-GPHH). In the first stage, only performance is optimised. Then, in the second stage, both objectives are considered (using our new MOGP). The experimental results showed that TS-GPHH could obtain much smaller and more interpretable routing policies than state-of-the-art single-objective GPHH, without deteriorating performance. Compared with traditional MOGP, TS-GPHH can obtain a much better and more widespread Pareto front.
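The stage switch is the core idea. A schematic selection routine under assumed perf/size scoring functions; the paper's actual MOGP operates on GP trees, which this sketch abstracts away:

```python
def two_stage_select(population, gen, switch_gen, perf, size):
    """Stage 1: rank by performance only. Stage 2: rank by Pareto domination
    count over (performance, size), both minimised. `perf` and `size` map an
    individual to its scores; all names here are schematic stand-ins."""
    if gen < switch_gen:
        return sorted(population, key=perf)  # stage 1: performance only

    def dominates(b, a):  # b dominates a on (perf, size)
        return (perf(b) <= perf(a) and size(b) <= size(a)
                and (perf(b) < perf(a) or size(b) < size(a)))

    def rank(ind):  # number of individuals dominating `ind` (0 = Pareto front)
        return sum(dominates(other, ind) for other in population)

    return sorted(population, key=lambda ind: (rank(ind), perf(ind)))

# Toy usage: individuals are (cost, tree_size) pairs
pop = [(3.0, 40), (3.1, 10), (5.0, 5)]
print(two_stage_select(pop, gen=60, switch_gen=50,
                       perf=lambda i: i[0], size=lambda i: i[1]))
```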

Nikander, P., Autiosalo, J., Paavolainen, S..  2019.  Interledger for the Industrial Internet of Things. 2019 IEEE 17th International Conference on Industrial Informatics (INDIN). 1:908–915.

The upsurge of the Industrial Internet of Things is forcing industrial information systems to enable less hierarchical information flow. The connections between humans, devices, and their digital twins are growing in number, creating a need for new kinds of security and trust solutions. To address these needs, industries are applying distributed ledger technologies, also known as blockchains. A significant number of use cases have been studied in the sectors of logistics, energy markets, smart grid security, and food safety, with frequently reported benefits in transparency, reduced costs, and disintermediation. However, distributed ledger technologies have challenges with transaction throughput, latency, and resource requirements, which render the technology unusable in many cases, particularly with constrained Internet of Things devices. To overcome these challenges within the Industrial Internet of Things, we suggest a set of interledger approaches that enable trusted information exchange across different ledgers and constrained devices. With these approaches, the technically most suitable ledger technology can be selected for each use case while simultaneously enjoying the benefits of the most widespread ledger implementations. We present the state of the art of distributed ledger technologies to support the use of interledger approaches in industrial settings.
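Interledger approaches often rest on hash locks, where revealing one secret releases matching entries on two ledgers. A toy illustration of the mechanism (not from the paper):

```python
import hashlib

def lock(secret: bytes) -> str:
    return hashlib.sha256(secret).hexdigest()

class HashLockEntry:
    """Toy hash-locked ledger entry: it releases only to the secret whose
    SHA-256 digest it was created with, so matching entries on two ledgers
    are unlocked by the same revealed secret."""
    def __init__(self, digest: str):
        self.digest, self.released = digest, False
    def release(self, secret: bytes) -> bool:
        if hashlib.sha256(secret).hexdigest() == self.digest:
            self.released = True
        return self.released

secret = b"device-42-transfer"
entry_a = HashLockEntry(lock(secret))  # on a constrained local ledger
entry_b = HashLockEntry(lock(secret))  # on a public ledger
entry_a.release(secret)         # revealing the secret on ledger A...
print(entry_b.release(secret))  # ...lets the same secret unlock ledger B: True
```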