Biblio
In this paper, we present results from a human-subject study designed to explore two facets of human mental models of robots - inferred capability and intention - and their relationship to overall trust and eventual decisions. In particular, we examine delegation situations characterized by uncertainty, and explore how inferred capability and intention are applied across different tasks. We develop an online survey where human participants decide whether to delegate control to a simulated UAV agent. Our study shows that human estimations of robot capability and intent correlate strongly with overall self-reported trust. However, overall trust is not independently sufficient to determine whether a human will decide to trust (delegate) a given task to a robot. Instead, our study reveals that estimations of robot intention, capability, and overall trust are integrated when deciding to delegate. From a broader perspective, these results suggest that calibrating overall trust alone is insufficient; to make correct decisions, humans need (and use) multi-faceted mental models when collaborating with robots across multiple contexts.
When a robot breaks a person's trust by making a mistake or failing, continued interaction will depend heavily on how the robot repairs the trust that was broken. Prior work in psychology has demonstrated that both the trust violation framing and the trust repair strategy influence how effectively trust can be restored. We investigate trust repair between a human and a robot in the context of a competitive game, where a robot tries to restore a human's trust after a broken promise, using either a competence or integrity trust violation framing and either an apology or denial trust repair strategy. Results from a 2×2 between-subjects study (n = 82) show that participants interacting with a robot employing the integrity trust violation framing and the denial trust repair strategy are significantly more likely to exhibit behavioral retaliation toward the robot. In the Dyadic Trust Scale survey, an interaction between trust violation framing and trust repair strategy was observed. Our results demonstrate the importance of considering both trust violation framing and trust repair strategy choice when designing robots to repair trust. We also discuss the influence of human-to-robot promises and ethical considerations when framing and repairing trust between a human and robot.
Project Aquaticus is a human-robot teaming competition on the water involving autonomous surface vehicles and human-operated motorized kayaks. Teams composed of both humans and robots share the same physical environment to play capture the flag. In this paper, we present results from seven competitions of our half-court (one participant versus one robot) game. We found that participants reported more trust in robots that exhibited more aggressive behaviors.
Human-robot trust is crucial to successful human-robot interaction. We conducted a study with 798 participants distributed across 32 conditions using four dimensions of human-robot trust (reliable, capable, ethical, sincere) identified by the Multi-Dimensional Measure of Trust (MDMT). We tested whether these dimensions can differentially capture gains and losses in human-robot trust across robot roles and contexts. Using a 4 scenario × 4 trust dimension × 2 change direction between-subjects design, we found the behavior change manipulation effective for each of the four subscales. However, the pattern of results best supported a two-dimensional conception of trust, with reliable-capable and ethical-sincere as the major constituents.
Crowd sensing is one of the core features of the Internet of Vehicles, and using the Internet of Vehicles for crowd sensing is conducive to the rational allocation of sensing tasks. This paper studies the task allocation problem for crowd sensing in the Internet of Vehicles and proposes a trajectory-based task allocation scheme. Under a limited budget constraint, participants' trajectories are taken as an indicator of spatiotemporal availability; following the solution idea of the minimal-cover problem, the minimum number of participating vehicles is selected to achieve coverage of the target area.
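A minimal sketch of the budget-constrained greedy cover idea referenced above (not the authors' implementation; the vehicle costs, trajectory cells, and value-per-cost heuristic below are illustrative assumptions):

    def select_vehicles(candidates, target_cells, budget):
        """Greedy minimal-cover selection of participant vehicles.

        candidates:   dict vehicle_id -> (cost, set of grid cells the vehicle's
                      trajectory passes through in the sensing window); costs > 0
        target_cells: set of grid cells that must be covered
        budget:       total recruitment budget
        """
        uncovered = set(target_cells)
        chosen, spent = [], 0
        while uncovered:
            # pick the affordable vehicle covering the most uncovered cells per unit cost
            best = max(
                (v for v, (cost, cells) in candidates.items()
                 if v not in chosen and spent + cost <= budget and cells & uncovered),
                key=lambda v: len(candidates[v][1] & uncovered) / candidates[v][0],
                default=None,
            )
            if best is None:  # budget exhausted or no remaining vehicle adds coverage
                break
            chosen.append(best)
            spent += candidates[best][0]
            uncovered -= candidates[best][1]
        return chosen, uncovered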
In order to improve the accuracy of similarity computation, an improved collaborative filtering algorithm based on trust and information entropy is proposed in this paper. First, direct trust between users is derived from their ratings in order to uncover potential trust relationships, and a time decay function is introduced to model how user interest decays over time. Second, direct trust and indirect trust are combined into an overall trust value, which is weighted with the Pearson similarity to obtain a trust similarity. Information entropy theory is then used to compute a similarity based on weighted information entropy. Finally, the trust similarity and the entropy-based similarity are weighted together into a combined similarity, which is used to predict the target user's ratings and generate recommendations. Simulations show that the improved algorithm achieves higher recommendation accuracy and can provide a more accurate and reliable recommendation service.
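A rough sketch of how such a combined similarity could be assembled; the paper's exact trust propagation, decay function, and entropy weighting are not given here, so the helper functions and weights below are illustrative assumptions:

    import math

    def pearson(ru, rv):
        """Pearson similarity over co-rated items (ru, rv: item -> rating)."""
        common = set(ru) & set(rv)
        if len(common) < 2:
            return 0.0
        mu = sum(ru[i] for i in common) / len(common)
        mv = sum(rv[i] for i in common) / len(common)
        num = sum((ru[i] - mu) * (rv[i] - mv) for i in common)
        den = math.sqrt(sum((ru[i] - mu) ** 2 for i in common) *
                        sum((rv[i] - mv) ** 2 for i in common))
        return num / den if den else 0.0

    def entropy_similarity(ru, rv):
        """Similarity derived from the entropy of rating differences on co-rated items."""
        common = set(ru) & set(rv)
        if not common:
            return 0.0
        diffs = [abs(ru[i] - rv[i]) for i in common]
        counts = {}
        for d in diffs:
            counts[d] = counts.get(d, 0) + 1
        h = -sum((c / len(diffs)) * math.log2(c / len(diffs)) for c in counts.values())
        return 1.0 / (1.0 + h)  # more consistent rating differences -> higher similarity

    def combined_similarity(ru, rv, overall_trust, alpha=0.5, beta=0.5):
        """Weight overall trust with Pearson similarity, then mix in the entropy term."""
        trust_sim = beta * overall_trust + (1 - beta) * pearson(ru, rv)
        return alpha * trust_sim + (1 - alpha) * entropy_similarity(ru, rv)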
In the past few years, visual information collection and transmission have increased significantly for various applications. Smart vehicles, service robotic platforms, and surveillance cameras for smart city applications collect a large amount of visual data. Preserving the privacy of the people who appear in this data is an important factor in the storage, processing, sharing, and transmission of visual data across the Internet of Robotic Things (IoRT). In this paper, a novel anonymisation method for information security and privacy preservation of visual data in the sharing layer of the Web of Robotic Things (WoRT) is proposed. The proposed framework uses deep neural network based semantic segmentation to preserve privacy in video data according to the access level of the applications and users. The data is anonymised for applications with lower access levels, while applications with higher legal access levels can analyse and annotate the complete data. The experimental results show that the proposed method, while giving the required access to authorities for legitimate smart city surveillance applications, is capable of preserving the privacy of the people who appear in the data.
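A minimal sketch of access-level-dependent anonymisation, assuming the person mask is produced upstream by any semantic-segmentation network; the access-level constants and OpenCV blurring are stand-ins, not the paper's actual pipeline:

    import cv2
    import numpy as np

    FULL_ACCESS = 2  # hypothetical levels: 0 = public app, 1 = operator, 2 = legal authority

    def anonymise_frame(frame, person_mask, access_level):
        """Blur person regions for low-access consumers; pass the frame through
        unchanged for consumers with full legal access.

        frame:       H x W x 3 BGR image
        person_mask: H x W boolean mask of pixels labelled 'person' by the segmentation DNN
        """
        if access_level >= FULL_ACCESS:
            return frame
        blurred = cv2.GaussianBlur(frame, (51, 51), 0)
        return np.where(person_mask[..., None], blurred, frame)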
In this paper we investigate deceptive defense strategies for web servers. Web servers are widely exploited resources in the modern cyber threat landscape. Often these servers are exposed to the Internet and accessible to a broad range of valid as well as malicious users. Common security strategies like firewalls are not sufficient to protect web servers. Deception-based information security enables a large set of countermeasures to decrease the efficiency of intrusions. In this work we describe several techniques from an attacker's reconnaissance process and match them with deceptive countermeasures. All proposed measures are implemented in an experimental web server with deceptive countermeasure capabilities. We also conducted an experiment with honeytokens and evaluated delay strategies against automated scanner tools.
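As an illustration of combining a honeytoken with a delay (tarpit) strategy against automated scanners, a small Flask sketch is shown below; the bait path, flagging logic, and delay value are assumptions, not the paper's implementation:

    import time
    from flask import Flask, request

    app = Flask(__name__)
    flagged_clients = set()  # IPs that have touched a honeytoken

    # Honeytoken: a resource no legitimate user would request,
    # but which automated scanners routinely probe for.
    @app.route("/backup/admin_credentials.txt")
    def honeytoken():
        flagged_clients.add(request.remote_addr)
        return "user=admin password=changeme", 200  # fake bait content

    @app.before_request
    def delay_suspicious_clients():
        # Tarpit: slow every further request from a flagged client
        # to reduce the throughput of automated scanner tools.
        if request.remote_addr in flagged_clients:
            time.sleep(3)

    @app.route("/")
    def index():
        return "Welcome"

    if __name__ == "__main__":
        app.run()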
Untethered microrobots actuated by external magnetic fields have drawn extensive attention recently, due to their potential advantages in real-time tracking and targeted delivery in vivo. Controlling a swarm of microrobots with external fields, however, is still one of the major challenges in this field. In this work, we present new methods to generate ribbon-like and vortex-like microrobotic swarms using oscillating and rotating magnetic fields, respectively. Paramagnetic nanoparticles with a diameter of 400 nm serve as the agents. These two types of swarms exhibit out-of-equilibrium structures, in which the nanoparticles perform synchronised motions. By tuning the magnetic fields, the swarming patterns can be reversibly transformed. Moreover, by increasing the pitch angle of the applied fields, the swarms are capable of performing navigated locomotion with a controlled velocity. This work contributes to a better understanding of microrobotic swarm behaviours and paves the way for potential biomedical applications.
In autonomous driving, security issues from robotic and automotive applications are converging toward each other. A novel approach for deriving secret keys using a lightweight cipher in the firmware of low-end control units is introduced. By evaluating the method on a typical low-end automotive platform, we demonstrate the reusability of the cipher for message authentication. The proposed solution counteracts a known security issue in the robotics and automotive domain.
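The abstract does not name the cipher; purely as an illustration of reusing a lightweight block cipher for message authentication, the sketch below builds a CBC-MAC on top of XTEA. The cipher choice, key layout, and zero-padding are assumptions (and zero-padded CBC-MAC is only appropriate for fixed-length messages):

    import struct

    M = 0xFFFFFFFF  # 32-bit mask

    def xtea_encrypt_block(v0, v1, key, rounds=32):
        """Encrypt one 64-bit block (two 32-bit words) with XTEA; key is four 32-bit words."""
        s, delta = 0, 0x9E3779B9
        for _ in range(rounds):
            a = ((((v1 << 4) & M) ^ (v1 >> 5)) + v1) & M
            v0 = (v0 + (a ^ ((s + key[s & 3]) & M))) & M
            s = (s + delta) & M
            a = ((((v0 << 4) & M) ^ (v0 >> 5)) + v0) & M
            v1 = (v1 + (a ^ ((s + key[(s >> 11) & 3]) & M))) & M
        return v0, v1

    def cbc_mac(message, key):
        """8-byte-block CBC-MAC over the message, reusing the XTEA primitive."""
        if len(message) % 8:
            message += b"\x00" * (8 - len(message) % 8)  # zero padding (illustrative only)
        c0, c1 = 0, 0
        for i in range(0, len(message), 8):
            m0, m1 = struct.unpack(">2I", message[i:i + 8])
            c0, c1 = xtea_encrypt_block(c0 ^ m0, c1 ^ m1, key)
        return struct.pack(">2I", c0, c1)

    # Example: tag a control payload with a shared key (values are hypothetical)
    key = (0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210)
    tag = cbc_mac(b"steering:+2.5deg", key)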
While the introduction of softwarization technologies such as SDN and NFV shifts the main focus of network management from hardware to software, network operators still have to care for a large amount of network and computing equipment located in the network center. Toward fully automated network management, we believe a robotic approach will be significant, meaning that robots will care for the physical equipment on behalf of humans. This paper explains the experience and insight gained throughout the development of a network management robot. We utilize ROS (Robot Operating System), a powerful platform for robot development that provides ease of development and expandability. Our roadmap for the network management robot is also presented, along with three use cases: environmental monitoring, operator assistance, and autonomous maintenance of equipment. Finally, the paper briefly describes experiments conducted in a commercial network center.
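For the environmental-monitoring use case, a minimal rospy node that publishes a sensor reading could look like the following; the topic name and the sensor stub are placeholders, not the system described in the paper:

    #!/usr/bin/env python
    import random

    import rospy
    from std_msgs.msg import Float32

    def read_temperature():
        # Placeholder for a real sensor driver mounted on the robot.
        return 21.0 + random.uniform(-0.5, 0.5)

    def monitor():
        rospy.init_node("env_monitor")
        pub = rospy.Publisher("datacenter/temperature", Float32, queue_size=10)
        rate = rospy.Rate(1)  # publish once per second
        while not rospy.is_shutdown():
            pub.publish(Float32(read_temperature()))
            rate.sleep()

    if __name__ == "__main__":
        monitor()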
Cybersecurity in control systems has been actively discussed in recent years. In particular, networked control systems (NCSs) over the Internet are exposed to various types of cyberattacks such as false data injection attacks. This paper proposes a method for detecting and mitigating false data injection attacks in interactive NCSs, i.e., bilateral teleoperation systems. A bilateral teleoperation system exchanges position and force information through the Internet between the master and slave robots. The proposed method utilizes two redundant communication channels for both the master-to-slave and slave-to-master paths. The attacks are detected by a tamper detection observer (TDO) on each of the master and slave sides; the TDO compares the position responses of the actual robots with those of robot models. A path selector on each side chooses the appropriate position and force responses from the responses received through the two communication channels, based on the outputs of the TDO. The proposed method is validated by simulations with attack models.
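A simplified, scalar sketch of the observer-plus-selector idea; the paper's TDO works on the full position and force responses of the bilateral system, so the threshold, scalar positions, and fallback below are illustrative assumptions:

    THRESHOLD = 0.05  # residual bound (assumed units, e.g. rad)

    def tamper_detected(received_position, model_position, threshold=THRESHOLD):
        """Tamper detection observer (TDO) residual test: compare a received
        position response against the position predicted by the local robot model."""
        return abs(received_position - model_position) > threshold

    def select_response(resp_channel_a, resp_channel_b, model_position):
        """Path selector: use whichever redundant channel is consistent with the model."""
        if not tamper_detected(resp_channel_a, model_position):
            return resp_channel_a
        if not tamper_detected(resp_channel_b, model_position):
            return resp_channel_b
        return model_position  # fall back to the model if both channels look tampered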
The field of robotics has matured using artificial intelligence and machine learning such that intelligent robots are being developed in the form of autonomous vehicles. The anticipated widespread use of intelligent robots and their potential to do harm has raised interest in their security. This research evaluates a cyberattack on the machine learning policy of an autonomous vehicle by designing and attacking a robotic vehicle operating in a dynamic environment. The primary contribution of this research is an initial assessment of effective manipulation through an indirect attack on a robotic vehicle that uses the Q-learning algorithm for real-time routing control. Secondly, the research highlights the effectiveness of this attack along with relevant artifact issues.
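For reference, a generic tabular Q-learning step of the kind such a routing controller runs is sketched below; the attack in the abstract is indirect, i.e. it perturbs the environment seen by the reward and transition functions rather than this controller code (all function names are illustrative):

    import random

    def q_learning_step(q, state, actions, reward_fn, next_state_fn,
                        alpha=0.1, gamma=0.9, epsilon=0.1):
        """One epsilon-greedy Q-learning update for real-time routing.

        q: dict (state, action) -> value, the table an indirect attacker tries to
           steer by manipulating the environment (e.g. moving obstacles or beacons).
        """
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: q.get((state, a), 0.0))
        nxt = next_state_fn(state, action)
        reward = reward_fn(state, action, nxt)
        best_next = max(q.get((nxt, a), 0.0) for a in actions)
        q[(state, action)] = ((1 - alpha) * q.get((state, action), 0.0)
                              + alpha * (reward + gamma * best_next))
        return nxt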
The recently enacted General Data Protection Regulation (GDPR) aims to protect all EU citizens from privacy and data breaches in an increasingly data-driven world. This deeply affects the factory domain and its human-centric automation paradigm. In particular, collaboration between humans and machines, as well as individual support, is enabled and enhanced by processing audio and video data, e.g. by using algorithms which re-identify humans or analyse human behaviour. We give an at-a-glance overview of the most significant impacts of this recent legal change on the automation domain. Furthermore, we introduce a representative scenario from production, deduce its legal implications under the GDPR, and derive a privacy-aware software architecture. This architecture combines modern virtualization techniques with authorization and end-to-end encryption to ensure secure communication between distributed services and databases serving distinct purposes.
As a new research hotspot in the field of artificial intelligence, deep reinforcement learning (DRL) has achieved success in various fields such as robot control, computer vision, and natural language processing. At the same time, whether its applications can be attacked, and how resistant they are to such attacks, has become a hot topic in recent years. We therefore select the representative Deep Q Network (DQN) algorithm in deep reinforcement learning and, for the first time, use robotic automatic pathfinding as the adversarial application scenario, attacking the DQN algorithm through its vulnerability to adversarial samples. In this paper, we first use DQN to find the optimal path and analyze the rules of DQN pathfinding. Then, we propose a method that can effectively find vulnerable points through white-box Q-table variation in DQN pathfinding training. Finally, we build a simulation environment as a basic experimental platform to test our method; through multiple experiments, we successfully find adversarial examples, and the experimental results show that the proposed supervised method is effective.
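A toy version of a white-box search for vulnerable Q-table entries, under the assumption that "vulnerable" means a single-entry perturbation breaks or lengthens the greedy path; the paper's exact criterion and perturbation scheme may differ:

    def greedy_path(q, start, goal, actions, step_fn, max_steps=200):
        """Roll out the greedy policy implied by a Q-table."""
        path, state = [start], start
        while state != goal and len(path) < max_steps:
            action = max(actions, key=lambda a: q.get((state, a), 0.0))
            state = step_fn(state, action)
            path.append(state)
        return path

    def find_vulnerable_points(q, start, goal, actions, step_fn, delta=1.0):
        """Perturb one Q-table entry at a time and keep those whose change
        prevents the greedy policy from reaching the goal or lengthens its path."""
        baseline = greedy_path(q, start, goal, actions, step_fn)
        vulnerable = []
        for key in list(q):
            original = q[key]
            q[key] = original - delta  # single-entry perturbation
            perturbed = greedy_path(q, start, goal, actions, step_fn)
            if perturbed[-1] != goal or len(perturbed) > len(baseline):
                vulnerable.append(key)
            q[key] = original  # restore
        return vulnerable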
As robotic capabilities improve and robots become more capable as team members, a better understanding of effective human-robot teaming is needed. In this paper, we investigate failures by robots in various team configurations in space EVA operations. This paper describes the methodology of extending and applying Work Models that Compute (WMC), a computational simulation framework, to model robot failures, interruptions, and the resolutions they require. Using these models, we investigate how different team configurations respond to a robot's failure to correctly complete the task and overall mission. We also identify key factors that impact the teamwork metrics, for team designers to keep in mind while assembling teams and assigning taskwork to the agents. We highlight the team performance metrics that these failures impact through the varying components of teaming and interaction that occur. Finally, we discuss the broader implications of this work and the future work needed to investigate function allocation in human-robot teams.
Robots operating alongside humans in field environments have the potential to greatly increase the situational awareness of their human teammates. A significant challenge, however, is the efficient conveyance of what the robot perceives to the human in order to achieve improved situational awareness. We believe augmented reality (AR), which allows a human to simultaneously perceive the real world and digital information situated virtually in the real world, has the potential to address this issue. We propose to demonstrate that augmented reality can be used to enable human-robot cooperative search, where the robot can both share search results and assist the human teammate in navigating to a search target.
The new Tor network (version 6.0.5) can help domestic users easily get "over the wall" (bypass censorship), and of course criminals may also use it to visit deep and dark web sites. This paper analyzes the core technology of the new Tor network, its flow obfuscation based on the meek plug-in, and uses a real instance to verify the new Tor network's fast connectivity. On the basis of analyzing the traffic obfuscation mechanism and Tor-based network crime, it puts forward measures to prevent the Tor network from being used to commit network crime.
Nowadays, robots are ubiquitous and an integral part of our daily lives; they can be seen almost everywhere in industry, hospitals, the military, etc. To provide remote access and control, robots are usually connected to a local network or to the Internet through WiFi or Ethernet. As such, maintaining safe and secure access to such robots is of great importance and a critical mission. Security threats may result in completely preventing access to and control of the robot. The consequences of this may be catastrophic and may cause immediate physical damage to the robot. This paper presents a security risk assessment of the well-known PeopleBot, a mobile robot platform from Adept MobileRobots Company. Initially, we thoroughly examined security threats related to remotely accessing the PeopleBot robot. We conducted an impact-oriented analysis of the wireless communication medium, the main method considered for remotely accessing the PeopleBot robot. Numerous experiments using SSH and server-client applications were conducted, and they demonstrated that certain attacks result in denying remote access service to the PeopleBot robot; consequently, and dangerously, the robot becomes unavailable. Finally, we suggest one possible mitigation and provide useful conclusions to raise awareness of possible security threats on robotic systems, especially when the robots are involved in critical missions or applications.