Biblio
In human-robot collaboration (HRC), human trust in the robot is the human's expectation that the robot will execute tasks with the desired performance. A higher level of trust increases a human operator's willingness to assign tasks, share plans, and reduce interruptions during robot execution, thereby facilitating human-robot integration both physically and mentally. However, due to real-world disturbances, robots inevitably make mistakes, which decreases human trust and further affects the collaboration. Trust is fragile: it is easily lost when robots show an inability to execute tasks, making trust maintenance challenging. To maintain human trust, this research develops a trust repair framework based on a human-to-robot attention transfer (H2R-AT) model and a user trust study. The rationale behind this framework is that promptly correcting mistakes restores human trust. With H2R-AT, a robot localizes human verbal concerns and promptly corrects its mistakes, avoiding task failures at an early stage and ultimately improving human trust. The user trust study measures trust before and after the behavior corrections to quantify trust loss. Robot experiments covering four typical mistakes (wrong action, wrong region, wrong pose, and wrong spatial relation) validated the accuracy of H2R-AT in correcting robot behaviors; a user trust study with 252 participants evaluated the changes in trust levels before and after the corrections. The effectiveness of trust repair was evaluated by mistake-correction accuracy and trust improvement.
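The before/after trust comparison described above can be illustrated with a minimal sketch. The 7-point rating scale, the sample ratings, and the use of a paired t-test are assumptions for illustration only, not details taken from the paper.

```python
# Minimal sketch of quantifying trust change from pre-/post-correction survey scores.
# Assumptions (not from the paper): ratings are on a 7-point Likert scale, stored as
# paired per-participant arrays; a paired t-test stands in for whatever statistical
# analysis the original study actually used.
import numpy as np
from scipy import stats

# Hypothetical paired trust ratings for the same participants (1 = no trust, 7 = full trust).
trust_before_correction = np.array([3, 4, 2, 5, 3, 4, 3, 2, 4, 3])
trust_after_correction  = np.array([5, 5, 4, 6, 4, 5, 4, 4, 5, 4])

# Mean trust improvement attributable to the prompt mistake correction.
mean_change = np.mean(trust_after_correction - trust_before_correction)

# Paired t-test: is post-correction trust significantly higher than pre-correction trust?
t_stat, p_value = stats.ttest_rel(trust_after_correction, trust_before_correction)

print(f"mean trust change: {mean_change:+.2f}")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```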
A human-swarm cooperative system, in which multiple robots and a human supervisor form a mission team, has been widely used in emergency scenarios such as criminal tracking and victim assistance. These scenarios involve human safety and require the robot team to switch quickly from its current task to a new, urgent task. Such sudden mission changes complicate robot motion adjustment and increase the risk of swarm performance degradation. In human-human collaboration, trust reflects a general expectation of the collaboration; based on this trust, humans mutually adjust their behaviors for better teamwork. Inspired by this, a trust-aware reflective control method (Trust-R) was developed in this research for a robot swarm to understand the collaborative mission and calibrate its motions accordingly for better emergency response. Typical emergency tasks in public security, "transition between area-inspection tasks" and "response to an emergent target (car accident)", together with eight fault-related situations, were designed to simulate robot deployments. A human user study with 50 volunteers was conducted to model trust and assess swarm performance. Trust-R's effectiveness in supporting a robot team for emergency response was validated by improved task performance and increased trust scores.
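To make the idea of trust-calibrated motion concrete, the sketch below shows a generic trust-weighted blending of velocity commands during a task switch. This is not the paper's Trust-R controller; the blending rule, velocity commands, and trust scores are hypothetical and only illustrate how a per-robot trust score could modulate the transition to an emergent task.

```python
# Generic trust-weighted motion-calibration sketch (NOT the paper's Trust-R algorithm).
# A per-robot trust score in [0, 1] controls how aggressively each robot blends the
# emergent task's velocity command with its current one.
import numpy as np

def calibrated_velocity(v_current, v_emergent, trust):
    """Blend current and emergent-task velocity commands by a trust weight in [0, 1].

    High-trust robots switch quickly to the emergent task; low-trust robots
    transition more conservatively to limit performance degradation.
    """
    w = np.clip(trust, 0.0, 1.0)
    return (1.0 - w) * np.asarray(v_current) + w * np.asarray(v_emergent)

# Hypothetical 2D velocity commands for three robots and their trust scores.
current_cmds  = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]
emergent_cmds = [np.array([0.0, 2.0]), np.array([2.0, 0.0]), np.array([1.5, 1.5])]
trust_scores  = [0.9, 0.4, 0.7]

for i, (vc, ve, tr) in enumerate(zip(current_cmds, emergent_cmds, trust_scores)):
    print(f"robot {i}: calibrated velocity = {calibrated_velocity(vc, ve, tr)}")
```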
Vehicular ad-hoc networks (VANETs) are a main driving force in alleviating traffic congestion and accelerating the construction of intelligent transportation systems. However, the rapid growth in the number of vehicles poses multiple challenges to building a secure vehicular network. This paper proposes an identity-based aggregate signature scheme for VANETs that protects vehicle identity privacy, receives messages in a timely manner, and authenticates them quickly. The scheme uses an aggregate signature algorithm to combine the signatures of multiple users into a single signature and incorporates batch authentication to verify multiple vehicular units at once, thereby improving verification efficiency. In addition, vehicle pseudo-identities are used to achieve anonymity and privacy protection. Finally, secure storage of message signatures is realized using reliable cloud storage. Compared with similar schemes, the proposed scheme improves authentication efficiency while ensuring security and incurs lower storage overhead.
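The sketch below illustrates the core idea of aggregating many signatures into one and verifying them in a single batch. It uses standard BLS aggregation from the py_ecc library as a stand-in; it is not the paper's identity-based construction, and pseudo-identities are modeled simply as labels attached to the signed messages.

```python
# Illustrative signature aggregation + batch verification using BLS (py_ecc), as an
# analogue of the aggregation/batch-authentication idea; NOT the paper's scheme.
from py_ecc.bls import G2Basic as bls

# Hypothetical on-board units, each with its own key pair and pseudo-identity.
vehicles = []
for i in range(3):
    sk = bls.KeyGen(i.to_bytes(32, "big"))   # 32-byte seed (demo only, not secure key material)
    vehicles.append({"pseudo_id": f"PID-{i}", "sk": sk, "pk": bls.SkToPk(sk)})

# Each vehicle signs its own (distinct) traffic message, tagged with its pseudo-identity.
messages = [f'{v["pseudo_id"]}: congestion ahead on segment {i}'.encode()
            for i, v in enumerate(vehicles)]
signatures = [bls.Sign(v["sk"], m) for v, m in zip(vehicles, messages)]

# A roadside unit aggregates the individual signatures into one short signature ...
agg_sig = bls.Aggregate(signatures)

# ... and verifies all messages in a single batch operation.
ok = bls.AggregateVerify([v["pk"] for v in vehicles], messages, agg_sig)
print("batch verification passed:", ok)
```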
Due to the user-interface limitations of wearable devices, voice-based interfaces are becoming more common; speaker recognition may then address the authentication requirements of wearable applications. Wearable devices have a small form factor, a limited energy budget, and limited computational capacity. In this paper, we examine the challenge of computing speaker recognition on small wearable platforms and, specifically, of reducing resource use (energy, response time) by trimming the input through careful feature selection. For our experiments, we analyze four different feature-selection algorithms and three different feature sets for speaker identification and speaker verification. Our results show that Principal Component Analysis (PCA) with frequency-domain features had the highest accuracy, Pearson Correlation (PC) with time-domain features had the lowest energy use, and Recursive Feature Elimination (RFE) with frequency-domain features had the lowest latency. Our results can guide developers in choosing feature sets and configurations for speaker-authentication algorithms on wearable platforms.
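For readers unfamiliar with the three selection strategies named above, the sketch below applies PCA, Pearson-correlation ranking, and RFE to a toy classification dataset. The dataset, classifier, and feature counts are placeholders, not the paper's wearable features or experimental setup.

```python
# Minimal sketch of the three feature-selection strategies compared above, applied to a
# toy stand-in for speaker-identification features (placeholder data, not the paper's).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for per-frame speaker features (e.g., frequency- or time-domain descriptors).
X, y = make_classification(n_samples=300, n_features=40, n_informative=10,
                           n_classes=3, random_state=0)

# 1) PCA: project the features onto the top principal components (dimensionality reduction).
X_pca = PCA(n_components=10, random_state=0).fit_transform(X)

# 2) Pearson correlation: rank features by |correlation| with the label and keep the top 10
#    (correlating with the integer class label is a simplification for this toy example).
corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
X_pc = X[:, np.argsort(corr)[::-1][:10]]

# 3) Recursive Feature Elimination: iteratively drop the weakest features of a linear model.
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=10).fit(X, y)
X_rfe = X[:, rfe.support_]

print("PCA:", X_pca.shape, "| Pearson:", X_pc.shape, "| RFE:", X_rfe.shape)
```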