Bibliography
In human-robot collaboration (HRC), human trust in a robot is the human's expectation that the robot will execute tasks with the desired performance. A higher level of trust increases a human operator's willingness to assign tasks, share plans, and reduce interruptions during robot task execution, thereby facilitating human-robot integration both physically and mentally. However, due to real-world disturbances, robots inevitably make mistakes, decreasing human trust and further affecting the collaboration. Trust is fragile, and trust loss is easily triggered when robots show an inability to execute tasks, making trust maintenance challenging. To maintain human trust, this research develops a trust repair framework based on a human-to-robot attention transfer (H2R-AT) model and a user trust study. The rationale of this framework is that a prompt mistake correction restores human trust. With H2R-AT, a robot localizes human verbal concerns and promptly corrects its behavior to avoid task failures at an early stage and ultimately improve human trust. The user trust study measures trust before and after the behavior corrections to quantify the trust loss. Robot experiments covering four typical mistakes (wrong action, wrong region, wrong pose, and wrong spatial relation) validated the accuracy of H2R-AT in correcting robot behaviors; a user trust study with 252 participants was conducted, and the changes in trust levels before and after the corrections were evaluated. The effectiveness of trust repair was assessed by the mistake correction accuracy and the trust improvement.
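As a rough illustration of the idea of mapping a verbal concern to one of the four mistake categories and a corrective behavior, the toy Python sketch below uses simple keyword matching; the concern phrases, category names, and correction actions are illustrative assumptions and not the actual H2R-AT attention-transfer model.

    # Hypothetical concern-to-correction dispatch inspired by the four mistake
    # types above; keyword rules and corrections are assumptions, not H2R-AT.

    MISTAKE_KEYWORDS = {
        "wrong_action": ["stop pouring", "don't pour", "wrong action"],
        "wrong_region": ["not there", "wrong side", "other table"],
        "wrong_pose": ["tilted", "upside down", "wrong angle"],
        "wrong_spatial_relation": ["too close", "on top of", "behind the"],
    }

    CORRECTIONS = {
        "wrong_action": "abort current action and re-plan",
        "wrong_region": "re-localize the target region from the human's cue",
        "wrong_pose": "re-compute the end-effector pose",
        "wrong_spatial_relation": "adjust placement relative to the reference object",
    }

    def classify_concern(utterance: str) -> str | None:
        """Return the mistake category whose keywords match the verbal concern."""
        text = utterance.lower()
        for category, phrases in MISTAKE_KEYWORDS.items():
            if any(p in text for p in phrases):
                return category
        return None

    def correct_behavior(utterance: str) -> str:
        category = classify_concern(utterance)
        if category is None:
            return "no concern detected: continue task"
        return CORRECTIONS[category]

    print(correct_behavior("stop pouring, that's the wrong action"))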
A human-swarm cooperative system, which combines multiple robots and a human supervisor into a mission team, has been widely used in emergency scenarios such as criminal tracking and victim assistance. These scenarios involve human safety and require the robot team to quickly transition from its current task to the new emergency task. This sudden mission change makes robot motion adjustment difficult and increases the risk of performance degradation in the swarm. Trust in human-human collaboration reflects a general expectation of the collaboration; based on this trust, humans mutually adjust their behaviors for better teamwork. Inspired by this, in this research a trust-aware reflective control method (Trust-R) was developed for a robot swarm to understand the collaborative mission and calibrate its motions accordingly for better emergency response. Typical emergency tasks in social security, “transition between area inspection tasks” and “response to an emergent target (car accident)”, together with eight fault-related situations, were designed to simulate robot deployments. A human user study with 50 volunteers was conducted to model trust and assess swarm performance. Trust-R's effectiveness in supporting a robot team for emergency response was validated by improved task performance and increased trust scores.
Blockchain technology is the cornerstone of digital trust and system decentralization. The need to eliminate trust in computing systems has prompted researchers to investigate the applicability of Blockchain to decentralizing conventional security models. Specifically, researchers continually aim to minimize trust in the well-known Public Key Infrastructure (PKI) model, which currently requires a trusted Certificate Authority (CA) to sign digital certificates. Recently, the Automated Certificate Management Environment (ACME) was standardized as a certificate issuance automation protocol. It minimizes human interaction by enabling certificates to be automatically requested, verified, and installed on servers. ACME, however, only solves the automation issue; the trust concerns remain because a trusted CA is still required. In this paper, we propose decentralizing the ACME protocol using Blockchain technology to address the trust issues of the existing PKI model and to eliminate the need for a trusted CA. The system was implemented and tested on the Ethereum Blockchain, and the results showed that the system is feasible in terms of cost, speed, and applicability to a wide range of devices, including Internet of Things (IoT) devices.
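The following is a minimal, self-contained Python sketch of the core idea: clients check a domain-to-key binding against an append-only, hash-chained registry instead of trusting a CA signature. The class, method names, and in-memory ledger are illustrative assumptions; the actual system is implemented as a contract on the Ethereum Blockchain.

    # Minimal sketch of a decentralized certificate registry, simulated as an
    # append-only hash-chained ledger in plain Python (illustrative only).

    import hashlib, json, time

    class CertificateLedger:
        def __init__(self):
            self.blocks = []  # each block binds a domain to a public-key fingerprint

        def _hash(self, record: dict) -> str:
            return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

        def register(self, domain: str, pubkey_fingerprint: str) -> str:
            prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
            record = {"domain": domain, "fingerprint": pubkey_fingerprint,
                      "timestamp": time.time(), "prev": prev}
            record["hash"] = self._hash(record)
            self.blocks.append(record)
            return record["hash"]

        def verify(self, domain: str, pubkey_fingerprint: str) -> bool:
            """A client checks the ledger instead of trusting a CA signature."""
            latest = next((b for b in reversed(self.blocks) if b["domain"] == domain), None)
            return latest is not None and latest["fingerprint"] == pubkey_fingerprint

    ledger = CertificateLedger()
    ledger.register("example.org", "sha256:ab12cd34")  # issuance, e.g. after an ACME-style challenge
    print(ledger.verify("example.org", "sha256:ab12cd34"))  # True: no CA needed for this check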
The decisions made by machines are increasingly comparable in predictive performance to those made by humans, but these decision-making processes are often concealed as black boxes. Additional techniques are required to extract understanding, and one such category is explanation methods. This research compares the explanations of two popular forms of artificial intelligence: neural networks and random forests. Researchers in either field often have divided opinions on transparency, and comparing explanations may reveal common ground truths between models. Such similarity can help to encourage trust in predictive accuracy alongside transparent structure and unite the respective research fields. This research explores a variety of simulated and real-world datasets that ensure fair applicability to both learning algorithms. A new heuristic explanation method that extends an existing technique is introduced, and our results show that it is somewhat similar to the other methods examined whilst also offering an alternative perspective on the least-important features.
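As a generic illustration of comparing explanations across the two model families (not the new heuristic method introduced in the paper), the sketch below ranks features for a random forest and a neural network using scikit-learn's permutation importance and checks how much their most important features overlap.

    # Illustrative comparison of feature attributions from a random forest and
    # a neural network via permutation importance (a generic technique).

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split
    import numpy as np

    X, y = make_classification(n_samples=1000, n_features=8, n_informative=4, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    models = {
        "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
        "neural_network": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
    }

    rankings = {}
    for name, model in models.items():
        model.fit(X_train, y_train)
        result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
        rankings[name] = np.argsort(result.importances_mean)[::-1]  # most important first

    # Agreement on the top features is one simple proxy for "similar ground truths".
    top_rf = set(rankings["random_forest"][:3])
    top_nn = set(rankings["neural_network"][:3])
    print("shared top-3 features:", top_rf & top_nn)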
Genetic Programming Hyper-heuristic (GPHH) has been successfully applied to automatically evolve effective routing policies for the complex Uncertain Capacitated Arc Routing Problem (UCARP). However, GPHH typically ignores the interpretability of the evolved routing policies. As a result, GP-evolved routing policies are often very complex and hard for human users to understand and trust. In this paper, we aim to improve the interpretability of GP-evolved routing policies. To this end, we propose a new Multi-Objective GP (MOGP) to optimise performance and size simultaneously. A major issue here is that size is much easier to optimise than performance, and the search tends to be biased towards small but poor routing policies. To address this issue, we propose a simple yet effective Two-Stage GPHH (TS-GPHH). In the first stage, only performance is optimised. Then, in the second stage, both objectives are considered (using our new MOGP). The experimental results showed that TS-GPHH can obtain much smaller and more interpretable routing policies than the state-of-the-art single-objective GPHH, without deteriorating performance. Compared with traditional MOGP, TS-GPHH can also obtain a much better and more widespread Pareto front.
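A schematic Python sketch of the two-stage selection idea follows; individuals are mocked as (performance, size) pairs to be minimised, whereas the real system evolves GP trees as UCARP routing policies, so the sketch is illustrative of the selection logic only.

    # Schematic sketch of the two-stage idea behind TS-GPHH: stage 1 selects on
    # performance alone, stage 2 selects on both performance and size via
    # Pareto dominance. Individuals are mocked as (performance, size) pairs.

    import random

    def dominates(a, b):
        """Minimisation on both objectives: a dominates b."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def stage_one_select(population, k):
        """Single-objective stage: keep the k best individuals by performance."""
        return sorted(population, key=lambda ind: ind[0])[:k]

    def stage_two_pareto_front(population):
        """Multi-objective stage: non-dominated (performance, size) trade-offs."""
        return [a for a in population if not any(dominates(b, a) for b in population)]

    random.seed(0)
    population = [(random.uniform(0, 100), random.randint(5, 120)) for _ in range(50)]
    seeds = stage_one_select(population, 10)   # stage 1: performance only
    front = stage_two_pareto_front(seeds)      # stage 2: performance vs size
    print(sorted(front))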
The upsurge of the Industrial Internet of Things is forcing industrial information systems to enable less hierarchical information flow. The connections between humans, devices, and their digital twins are growing in number, creating a need for new kinds of security and trust solutions. To address these needs, industries are applying distributed ledger technologies, also known as blockchains. A significant number of use cases have been studied in the sectors of logistics, energy markets, smart grid security, and food safety, with frequently reported benefits in transparency, reduced costs, and disintermediation. However, distributed ledger technologies have challenges with transaction throughput, latency, and resource requirements, which render the technology unusable in many cases, particularly with constrained Internet of Things devices. To overcome these challenges within the Industrial Internet of Things, we suggest a set of interledger approaches that enable trusted information exchange across different ledgers and constrained devices. With these approaches, the technically most suitable ledger technology can be selected for each use case while simultaneously enjoying the benefits of the most widespread ledger implementations. We also present the state of the art of distributed ledger technologies to support the use of interledger approaches in industrial settings.
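One common interledger building block is hash-locking, where the same hash lock is placed on two ledgers and revealing the preimage on one side enables the claim on the other. The Python sketch below simulates this with two in-memory ledgers; the class, names, and simplified escrow logic are illustrative assumptions rather than a specific production interledger protocol.

    # Minimal sketch of hash-lock based interledger exchange across two
    # simulated ledgers (illustrative only).

    import hashlib
    import secrets

    class Ledger:
        def __init__(self, name):
            self.name = name
            self.locked = {}    # lock_hash -> payload held in escrow
            self.finalized = []

        def lock(self, lock_hash: str, payload: str):
            self.locked[lock_hash] = payload

        def claim(self, secret: str) -> bool:
            """Releasing the escrowed payload requires the hash preimage."""
            lock_hash = hashlib.sha256(secret.encode()).hexdigest()
            payload = self.locked.pop(lock_hash, None)
            if payload is None:
                return False
            self.finalized.append(payload)
            return True

    # A constrained IoT-device ledger and a heavyweight backbone ledger.
    device_ledger, backbone_ledger = Ledger("device"), Ledger("backbone")

    secret = secrets.token_hex(16)  # known only to the initiator
    lock_hash = hashlib.sha256(secret.encode()).hexdigest()

    device_ledger.lock(lock_hash, "sensor-batch-42 committed")
    backbone_ledger.lock(lock_hash, "backbone record for sensor-batch-42")

    # Revealing the secret on one ledger lets the counterparty claim on the other.
    print(backbone_ledger.claim(secret), device_ledger.claim(secret))  # True True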