Bibliography
This paper presents the development and implementation of wearable sensors based on the transient responses of textile chemical sensors, forming an odorant detection system for a wearable humanoid robot. The textile chemical sensors consist of nine polymer/CNT nanocomposite gas sensors, organized into three prototypes of the wearable humanoid robot: (i) human axillary odor monitoring, (ii) human foot odor tracking, and (iii) wearable personal gas-leakage detection. These prototypes can be integrated into high-performance wearable wellness platforms such as smart clothes, smart shoes, and a wearable pocket toxic-gas detector. The operating mode is designed around ZigBee wireless communication for data acquisition and monitoring. The wearable humanoid robot offers several platforms for investigating the role of individual scent produced by different parts of the human body, such as axillary odor and foot odor, which have potential health implications when the body odor is abnormal or offensive. Moreover, the robot's wearable personal safety and security component is also effective for detecting NH3 leakage in the environment. Preliminary results with the nine textile chemical sensors for odor-biomarker and NH3 detection demonstrate the feasibility of using the wearable humanoid robot to distinguish the unpleasant odors released during physical activity. It also showed excellent performance in detecting a hazardous gas such as ammonia (NH3) at concentrations as low as 5 ppm.
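The abstract gives no implementation details, but a minimal sketch of the kind of transient-response thresholding such a system might perform is shown below. The baseline-normalization step, the channel layout, and the threshold constant standing in for the ~5 ppm NH3 calibration are all assumptions for illustration, not the paper's design.

```python
# Minimal sketch (assumed, not from the paper): flag an NH3 alarm when the
# normalized transient response of any of the nine textile sensors exceeds
# a threshold calibrated to roughly the 5 ppm detection limit.

NUM_SENSORS = 9
NH3_THRESHOLD = 0.12  # hypothetical (delta R / R0) response at ~5 ppm NH3

def normalized_response(resistance, baseline):
    """Transient response of a chemiresistive sensor: (R - R0) / R0."""
    return (resistance - baseline) / baseline

def detect_nh3(readings, baselines):
    """Return True if any sensor channel crosses the calibrated threshold."""
    responses = [normalized_response(r, b) for r, b in zip(readings, baselines)]
    return any(resp > NH3_THRESHOLD for resp in responses)

# Example: channel 4 shows an elevated response relative to its baseline.
baselines = [10.0] * NUM_SENSORS          # kilo-ohms, measured in clean air
readings = [10.1] * NUM_SENSORS
readings[4] = 11.5                        # transient rise on one channel
print(detect_nh3(readings, baselines))    # True
```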
In recent years, humanoid robots have become increasingly ubiquitous, finding wide applicability in fields spanning education, entertainment, and assistance. They can be considered complex cyber-physical systems (CPS) and, as such, are exposed to the same vulnerabilities. This can be very dangerous for people working in close proximity to these robots: by exploiting such vulnerabilities, attackers can not only violate people's privacy but, more importantly, command the robot's behavior so as to cause bodily harm, leading to devastating consequences. In this paper, we propose a solution not yet investigated in this field, based on secure enclaves, which in our opinion could represent a valuable defense against most of the possible attacks, and we suggest that developers adopt this precaution during the robot design phase.
Wireless networking opens up many opportunities to facilitate miniaturized robots in collaborative tasks, while the openness of the wireless medium exposes robots to Sybil attackers, who can break the fundamental trust assumption in robotic collaboration by forging a large number of fictitious robots. Recent advances advocate bulky multi-antenna systems to passively obtain fine-grained physical-layer signatures, rendering them unaffordable for miniaturized robots. To overcome this conundrum, this paper presents ScatterID, a lightweight system that attaches featherlight, batteryless backscatter tags to single-antenna robots to defend against Sybil attacks. Instead of passively "observing" signatures, ScatterID actively "manipulates" multipath propagation, using the backscatter tags to intentionally create rich multipath features obtainable by a single-antenna robot. These features are used to construct a distinct profile to detect the real signal source, even when the attacker is mobile and power-scaling. We implement ScatterID on the iRobot Create platform and evaluate it in typical indoor and outdoor environments. The experimental results show that our system achieves a high AUROC of 0.988 and an overall accuracy of 96.4% for identity verification.
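ScatterID's actual pipeline is not given in the abstract; the sketch below illustrates only the general idea of verifying a claimed identity by comparing tag-induced multipath feature vectors against a stored profile. The feature dimension, the cosine-similarity metric, and the decision threshold are all arbitrary choices for illustration.

```python
import numpy as np

# Illustrative sketch (assumptions throughout): each packet yields a vector
# of tag-induced multipath features; an identity is accepted only if its new
# features stay close to the profile built from earlier (enrollment) packets.

SIMILARITY_THRESHOLD = 0.9   # hypothetical decision boundary

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_identity(profile, features):
    """Accept the claimed identity only if the new multipath feature vector
    matches the stored profile; a Sybil forging many identities from one
    physical position tends to produce near-identical features."""
    return cosine_similarity(profile, features) >= SIMILARITY_THRESHOLD

rng = np.random.default_rng(0)
profile = rng.normal(size=16)                  # enrollment-phase features
genuine = profile + rng.normal(0, 0.05, 16)    # small channel variation
forged = rng.normal(size=16)                   # signal from another position
print(verify_identity(profile, genuine))       # True
print(verify_identity(profile, forged))        # False
```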
To meet the high requirements of human-machine interaction, this paper studies quadruped robots with human recognition and tracking capability. We first introduce a marker recognition system that uses a multi-thread laser scanner and retro-reflective markers to distinguish the robot's leader from other objects. When the robot follows the leader autonomously, a variant of the A* algorithm in which obstacle grids are virtually extended (EA*) is used to plan the path. If the robot instead needs to track and follow the leader's path as closely as possible, it trusts that the path the leader has traveled is safe enough and uses an incremental form of the EA* algorithm (IEA*) to reproduce the trajectory. Simulation and experimental results illustrate the feasibility and effectiveness of the proposed algorithms.
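The abstract describes EA* only as A* with virtually extended obstacle grids. The sketch below shows one plausible reading of that idea, inflating occupied cells by a safety radius before running plain grid A*; the grid encoding, unit step costs, and inflation radius are assumptions, not the paper's algorithm.

```python
import heapq

def inflate(grid, radius=1):
    """Virtually extend obstacles: mark every cell within `radius`
    (Chebyshev distance) of an occupied cell as occupied, so the planned
    path keeps clearance from real obstacles."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c]:
                for dr in range(-radius, radius + 1):
                    for dc in range(-radius, radius + 1):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols:
                            out[rr][cc] = 1
    return out

def a_star(grid, start, goal):
    """Plain 4-connected A* with a Manhattan heuristic."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue
        came_from[cur] = parent
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and not grid[nxt[0]][nxt[1]]:
                ng = g + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, cur))
    return None

grid = [[0] * 6 for _ in range(5)]
grid[2][2] = 1                                # single obstacle cell
path = a_star(inflate(grid, radius=1), (0, 0), (4, 5))
print(path)                                   # detours around the inflated zone
```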
In medical human-robot interactions, trust plays an important role, since for patients there may be more at stake than in other kinds of encounters with robots. In the current study, we address issues of trust in the interaction with a prototype of a therapeutic robot, the Universal RoboTrainer, in which the therapist records patient-specific tasks by kinesthetically guiding the patient's arm, which is connected to the robot. We carried out a user study with twelve pairs of participants who collaborated on recording a training program on the robot. We examine a) the degree to which participants identify the situation as uncomfortable or distressing, b) participants' own strategies to mitigate that stress, c) the degree to which the robot is held responsible for the problems that occur and the amount of agency ascribed to it, and d) when usability issues arise, what effect these have on participants' trust. We find signs of distress mostly in contexts with usability issues, as well as many verbal and kinesthetic mitigation strategies intuitively employed by the participants. Recommendations for robots to increase users' trust in kinesthetic interactions include the timely production of verbal cues that continuously confirm that everything is alright, as well as increased contingency in the presentation of strategies for recovering from usability issues that arise.
A recent study featuring a new kind of care robot indicated that participants expect a robot's ethical decision-making to be transparent in order to develop trust, even though the same kind of `inspection of thoughts' is not expected of a human carer. At first glance, this might suggest that robot transparency mechanisms are required for users to develop trust in robot-made ethical decisions. However, the participants were found to desire transparency only when they did not know the specifics of a human-robot social interaction. Humans trust others without observing their thoughts, which implies other means of determining trustworthiness. The study reported here suggests that this means is social interaction and observation, signifying that trust is a social construct; moreover, that `social determinants of trust' are the transparent elements. This socially determined behaviour draws on notions of virtue ethics: if a caregiver (nurse or robot) consistently provides good, ethical care, then patients can trust that caregiver to continue to do so. The same social determinants may apply to care robots, and thus it ought to be possible to trust them without the ability to see their thoughts. This study suggests why transparency mechanisms may not be effective in helping to develop trust in care robot ethical decision-making. It suggests that roboticists need to build sociable elements into care robots to help patients develop trust in the care robot's ethical decision-making.
The growing diffusion of robotics in our daily life demands a deeper understanding of the mechanisms of trust in human-robot interaction. The performance of a robot is one of the most important factors influencing the trust of a human user. However, it is still unclear how the circumstances in which a robot fails affect the user's trust. We investigate how the perception of robot failures may influence people's willingness to cooperate with the robot by following its instructions in a time-critical task. We conducted an experiment in which participants interacted with a robot that had previously failed in either a related or an unrelated task. We hypothesized that users' observed and self-reported trust ratings would be higher in the condition where the robot had previously failed in an unrelated task. A proof-of-concept study with nine participants tentatively confirms our hypothesis. At the same time, our results reveal some flaws in the experimental design and encourage a future large-scale study.
Robots that interact with children are becoming more common in settings such as child care and hospital environments. While such robots may mistakenly provide nonsensical information or have mechanical malfunctions, we know little about how these robot errors are perceived by children and how they impact trust. This is particularly important when robots provide children with information or instructions, such as in education or health care. Drawing inspiration from established psychology literature investigating how children trust entities who teach or provide them with information (informants), we designed and conducted an experiment to examine how robot errors affect how young children (3-5 years old) trust robots. Our results suggest that children utilize their understanding of people to develop their perceptions of robots, and use this to determine how to interact with robots. Specifically, we found that children developed their trust model of a robot based on the robot's previous errors, similar to how they would for a person. However, we failed to replicate other prior findings with robots. Our results provide insight into how children as young as 3 years old might perceive robot errors and develop trust.
This project develops techniques to protect against sensor attacks on cyber-physical systems. Specifically, a resilient version of the Kalman filtering technique, accompanied by a watermarking approach, is proposed to detect cyber-attacks and estimate the correct state of the system. The defense techniques are used in conjunction and validated on two case studies: i) an unmanned ground vehicle (UGV) in which an attacker alters the reference angle, and ii) a Cube Satellite (CubeSat) in which an attacker modifies the orientation of the satellite, degrading its performance. Based on this work, we show that the proposed techniques used in conjunction achieve better resiliency and defense capability against spoofing and replay attacks than either technique alone.
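The abstract names the ingredients without detailing them; below is a generic, minimal sketch of how a watermarked control input plus a chi-square test on the Kalman innovation can flag a replay. The scalar model, noise levels, watermark strength, and alarm threshold are illustrative assumptions, not the project's actual design.

```python
import numpy as np

# Sketch (assumed scalar model): a Kalman filter tracks
#   x[k+1] = a*x[k] + b*u[k] + w,   y[k] = x[k] + v.
# The controller adds a secret random watermark to u. Replayed measurements
# no longer reflect the current watermark, so innovations inflate and a
# windowed chi-square statistic crosses the alarm threshold.

rng = np.random.default_rng(1)
a, b, q, r = 0.95, 1.0, 0.01, 0.04    # dynamics and noise (assumed)
WM_STD = 0.4                           # secret watermark strength (assumed)
THRESH = 25.0                          # chi-square alarm threshold, window=10

x_true, x_hat, p = 0.0, 0.0, 1.0
recorded, stats = [], []
for k in range(200):
    u = 0.1 + rng.normal(0, WM_STD)           # nominal input + watermark
    x_true = a * x_true + b * u + rng.normal(0, np.sqrt(q))
    y = x_true + rng.normal(0, np.sqrt(r))
    if k < 100:
        recorded.append(y)                    # attacker records clean data
    else:
        y = recorded[k - 100]                 # replay in the second half
    x_pred = a * x_hat + b * u                # predict with the known input
    p_pred = a * p * a + q
    innov = y - x_pred                        # innovation (residual)
    s = p_pred + r                            # innovation variance
    gain = p_pred / s
    x_hat = x_pred + gain * innov
    p = (1 - gain) * p_pred
    stats.append(innov ** 2 / s)              # normalized squared residual
window = np.convolve(stats, np.ones(10), mode="valid")
print("alarm during replay:", bool(window[-1] > THRESH))   # True: detected
print("alarm before replay:", bool(window[50] > THRESH))   # typically False
```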
Various perceptual domains have underlying compositional semantics that are rarely captured in current models. We suspect this is because directly learning the compositional structure has evaded these models. Yet, the compositional structure of a given domain can be grounded in a separate domain thereby simplifying its learning. To that end, we propose a new approach to modeling bimodal perceptual domains that explicitly relates distinct projections across each modality and then jointly learns a bimodal sparse representation. The resulting model enables compositionality across these distinct projections and hence can generalize to unobserved percepts spanned by this compositional basis. For example, our model can be trained on red triangles and blue squares; yet, implicitly will also have learned red squares and blue triangles. The structure of the projections and hence the compositional basis is learned automatically; no assumption is made on the ordering of the compositional elements in either modality. Although our modeling paradigm is general, we explicitly focus on a tabletop building-blocks setting. To test our model, we have acquired a new bimodal dataset comprising images and spoken utterances of colored shapes (blocks) in the tabletop setting. Our experiments demonstrate the benefits of explicitly leveraging compositionality in both quantitative and human evaluation studies.
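The abstract states the modeling idea without algorithmic detail; the sketch below shows only the generic building block such models rest on, inferring one shared sparse code from two stacked modality dictionaries via ISTA (iterative soft-thresholding). The dictionaries, dimensions, and sparsity penalty are placeholders, not the paper's learned model.

```python
import numpy as np

# Generic joint sparse coding sketch (assumed, not the paper's model):
# both modalities are explained by ONE shared sparse code z, by stacking
# the per-modality dictionaries and solving
#   min_z 0.5*||[x_img; x_spk] - [D_img; D_spk] z||^2 + lam*||z||_1.

def ista(D, x, lam=0.1, iters=200):
    step = 1.0 / np.linalg.norm(D, 2) ** 2        # 1/L, L = Lipschitz const.
    z = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ z - x)                  # gradient of the LS term
        z = z - step * grad
        z = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # shrink
    return z

rng = np.random.default_rng(2)
D_img = rng.normal(size=(32, 20))    # placeholder image dictionary
D_spk = rng.normal(size=(24, 20))    # placeholder speech dictionary
D = np.vstack([D_img, D_spk])        # one shared code couples the modalities

z_true = np.zeros(20)
z_true[[3, 11]] = [1.5, -2.0]        # e.g. a "color" atom plus a "shape" atom
x = D @ z_true + rng.normal(0, 0.01, size=56)
z = ista(D, x)
print(np.flatnonzero(np.abs(z) > 0.1))   # recovers the active atoms [ 3 11]
```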
While the introduction of softwarization technologies such as SDN and NFV shifts the main focus of network management from hardware to software, network operators still have to care for a great deal of network and computing equipment located in the network center. Toward fully automated network management, we believe a robotic approach will be significant, meaning that a robot will care for the physical equipment on behalf of humans. This paper explains the experience and insight gained throughout the development of a network management robot. We utilize ROS (Robot Operating System), a powerful platform for robot development that provides ease of development and expandability. We also present our roadmap for the network management robot, along with three use cases: environmental monitoring, operator assistance, and autonomous maintenance of equipment. Finally, the paper briefly reports experimental results obtained in a commercial network center.
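The paper's robot implementation is not described in the abstract; as one small illustration of the ROS style it builds on, the sketch below publishes an environmental-monitoring reading on a topic. The topic name, message type choice, sensor stub, and publish rate are assumptions for illustration.

```python
#!/usr/bin/env python
# Minimal rospy sketch (illustrative; topic name and rate are assumptions):
# an environmental-monitoring node publishing a temperature reading that an
# operator dashboard or logging node could subscribe to.
import random

import rospy
from sensor_msgs.msg import Temperature

def read_temperature_sensor():
    """Placeholder for real sensor I/O on the network-center robot."""
    return 24.0 + random.uniform(-0.5, 0.5)

def main():
    rospy.init_node("env_monitor")
    pub = rospy.Publisher("/datacenter/temperature", Temperature, queue_size=10)
    rate = rospy.Rate(1)  # 1 Hz
    while not rospy.is_shutdown():
        msg = Temperature()
        msg.header.stamp = rospy.Time.now()
        msg.temperature = read_temperature_sensor()
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    main()
```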
Robots are a sophisticated form of IoT device: smart devices that scrutinize sensor data from multiple sources and observe events to decide the best course of action for supervising and maneuvering objects in the physical world. In this paper, localization of the robot is addressed by QR-code detection, and path optimization is accomplished by Dijkstra's algorithm. The robot navigates its environment automatically using sensors, and the shortest path is recomputed whenever heading measurements are updated by QR-code landmark recognition. The proposed approach greatly reduces computational burden and deployment complexity, as it uses artificial intelligence to self-correct the robot's course when required. An encrypted communication channel is established over a wireless local area network using the SSHv2 protocol to send and receive sensor data (or commands), making it an IoT-enabled robot.
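The abstract cites Dijkstra's algorithm by name; a standard sketch follows, using a toy graph of QR-code landmark waypoints as an assumed example (the node names and edge weights are invented).

```python
import heapq

def dijkstra(graph, start, goal):
    """Standard Dijkstra shortest path over a weighted adjacency dict."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    visited = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:                    # reconstruct the path
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return d, path[::-1]
        for nbr, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    return float("inf"), []

# Toy map: nodes are QR-code landmarks, weights are corridor lengths (assumed).
landmarks = {
    "QR1": {"QR2": 2.0, "QR3": 5.0},
    "QR2": {"QR3": 1.5, "QR4": 4.0},
    "QR3": {"QR4": 1.0},
}
print(dijkstra(landmarks, "QR1", "QR4"))  # (4.5, ['QR1', 'QR2', 'QR3', 'QR4'])
```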
Robotics and the Internet of Things (IoT) are enveloping our society at an exponential rate due to falling costs and better availability of hardware and software. Additionally, Cloud Robotics and the Robot Operating System (ROS) can offset onboard processing power. However, strong and fundamental security practices have not been applied to fully protect these systems, partially negating the benefits of IoT. Researchers are therefore tasked with finding ways of securing communications and systems. Since security and convenience are oftentimes at odds, securing many heterogeneous components without compromising performance can be daunting. Protecting systems from attacks and ensuring that connections and instructions come from approved devices, all while maintaining performance, is imperative. This paper focuses on the development of security best practices and a mesh framework with an open-source, multipoint-to-multipoint virtual private network (VPN) that can tie Linux, Windows, iOS, and Android devices into one secure fabric, with heterogeneous mobile robotic platforms running rospy in a secure cloud-robotics infrastructure.
In this paper, the problem of network connectivity is studied for an adversarial Internet of Battlefield Things (IoBT) system in which an attacker aims at disrupting the connectivity of the network by choosing to compromise one of the IoBT nodes at each time epoch. To counter such attacks, an IoBT defender attempts to reestablish the IoBT connectivity by either deploying new IoBT nodes or by changing the roles of existing nodes. This problem is formulated as a dynamic multistage Stackelberg connectivity game that extends classical connectivity games and that explicitly takes into account the characteristics and requirements of the IoBT network. In particular, the defender's payoff captures the IoBT latency as well as the sum of weights of disconnected nodes at each stage of the game. Due to the dependence of the attacker's and defender's actions at each stage of the game on the network state, the feedback Stackelberg solution [feedback Stackelberg equilibrium (FSE)] is used to solve the IoBT connectivity game. Then, sufficient conditions under which the IoBT system will remain connected, when the FSE solution is used, are determined analytically. Numerical results show that the expected number of disconnected sensors, when the FSE solution is used, decreases by up to 46% compared to a baseline scenario in which a Stackelberg game with no feedback is used, and by up to 43% compared to a baseline equal-probability policy.
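As a schematic of the payoff structure and the feedback solution concept described above (the notation below is assumed, since the abstract gives none), the defender's stage payoff and the FSE condition can be written as:

```latex
% Schematic only; the symbols are assumed, not the paper's notation.
% Stage payoff: the defender pays for the IoBT latency plus the weights of
% the nodes disconnected at stage t, given state s_t and both actions.
u_d^t(s_t, a_d^t, a_a^t) \;=\; -\Big( \lambda\, \ell(s_t, a_d^t, a_a^t)
      \;+\; \sum_{i \in \mathcal{D}(s_t, a_d^t, a_a^t)} w_i \Big)

% Feedback Stackelberg equilibrium: at every stage and in every state, the
% defender (leader) optimizes against the attacker's best response to its
% action, rather than committing to a fixed open-loop strategy.
a_d^{t*}(s_t) \in \arg\max_{a_d} \; u_d^t\big(s_t,\, a_d,\, \beta_a^t(s_t, a_d)\big),
\qquad
\beta_a^t(s_t, a_d) \in \arg\max_{a_a} \; u_a^t(s_t, a_d, a_a)
```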
Nowadays, robots are ubiquitous and an integral part of our daily lives; they can be seen almost everywhere in industry, hospitals, the military, etc. To provide remote access and control, robots are usually connected to a local network or to the Internet through WiFi or Ethernet. As such, it is critically important to maintain the safety and security of access to such robots. Security threats may completely prevent access to and control of the robot; the consequences of this may be catastrophic and may cause immediate physical damage to the robot. This paper presents a security risk assessment of the well-known PeopleBot, a mobile robot platform from the Adept MobileRobots company. Initially, we thoroughly examined security threats related to remotely accessing the PeopleBot robot. We conducted an impact-oriented analysis of the wireless communication medium, the main method considered for remotely accessing the PeopleBot robot. Numerous experiments using SSH and server-client applications were conducted, and they demonstrated that certain attacks result in denying remote-access service to the PeopleBot robot; consequently, and dangerously, the robot becomes unavailable. Finally, we suggest one possible mitigation and provide useful conclusions to raise awareness of possible security threats to robotic systems, especially when robots are involved in critical missions or applications.
In multi-robot applications, the maintained and desired network may be destroyed by failed robots. Existing self-healing algorithms handle only the case of a single robot failure; multiple robot failures, however, cause several challenges, such as a disconnected network and conflicts among repair paths. This paper presents a distributed self-healing algorithm based on 2-hop neighbor information to resolve the problems caused by multiple robot failures. Simulations and experiments show that the proposed algorithm manages to restore connectivity of the mobile robot network and improves the synchronization of the network globally, validating its effectiveness in resolving multiple robot failures.
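The abstract does not spell out the algorithm; the sketch below illustrates only the flavor of such repairs under an assumed rule: each robot keeps neighbor lists (2-hop information is what would let it evaluate its locality), and the lowest-ID live neighbor of each failed robot is elected, deterministically and hence without conflicting repair paths, to move into the vacated position. This election rule is an invention for illustration, not the paper's method.

```python
# Illustrative sketch only (assumed rule, not the paper's algorithm): when
# several robots fail at once, every live robot can locally decide, per
# failed neighbor, whether it is the designated replacement; determinism of
# the election (lowest live ID) avoids conflicting repair paths.

def designated_replacements(neighbors, positions, failed):
    """Map each elected robot to the position of the failed robot it fills."""
    moves = {}
    for f in failed:
        live = [n for n in neighbors[f] if n not in failed]
        if live:                          # otherwise not locally repairable
            moves[min(live)] = positions[f]
    return moves

# Chain topology 1-2-3-4-5; robots 2 and 4 fail simultaneously.
neighbors = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
positions = {1: (0, 0), 2: (1, 0), 3: (2, 0), 4: (3, 0), 5: (4, 0)}
print(designated_replacements(neighbors, positions, failed={2, 4}))
# {1: (1, 0), 3: (3, 0)}: robots 1 and 3 slide over to restore the chain
```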
Robotic vehicles, and especially autonomous robotic vehicles, can be attractive targets for attacks that cross the cyber-physical divide, that is, cyber attacks or sensory-channel attacks affecting the ability to navigate or complete a mission. Detection of such threats is typically limited to knowledge-based and vehicle-specific methods, which are applicable only to specific known attacks, or to methods that require computational power that is prohibitive for resource-constrained vehicles. Here, we present a method based on Bayesian networks that can not only tell whether an autonomous vehicle is under attack, but also whether the attack has originated from the cyber or the physical domain. We demonstrate the feasibility of the approach on an autonomous robotic vehicle built in accordance with the Generic Vehicle Architecture specification and equipped with a variety of popular communication and sensing technologies. The results of experiments involving command injection, rogue node, and magnetic interference attacks show that the approach is promising.
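The abstract does not give the network structure; as a toy illustration of the inference style, the sketch below infers whether an attack is absent, cyber, or physical from two binary symptoms via exact enumeration over a naive-Bayes structure. The structure and every CPT value are invented for illustration.

```python
# Toy Bayesian-network sketch (structure and CPTs invented): a hidden cause
# C in {none, cyber, physical} explains two observable symptoms, network
# anomalies and navigation/sensor anomalies. The posterior over C indicates
# not only *whether* the vehicle is under attack but from *which* domain.

PRIOR = {"none": 0.90, "cyber": 0.05, "physical": 0.05}
P_NET_ANOMALY = {"none": 0.02, "cyber": 0.85, "physical": 0.10}  # P(net=1|C)
P_NAV_ANOMALY = {"none": 0.02, "cyber": 0.30, "physical": 0.90}  # P(nav=1|C)

def posterior(net_anomaly, nav_anomaly):
    """P(C | evidence) by direct enumeration; the two symptoms are
    conditionally independent given the cause (naive Bayes)."""
    joint = {}
    for c, p in PRIOR.items():
        p *= P_NET_ANOMALY[c] if net_anomaly else 1 - P_NET_ANOMALY[c]
        p *= P_NAV_ANOMALY[c] if nav_anomaly else 1 - P_NAV_ANOMALY[c]
        joint[c] = p
    z = sum(joint.values())
    return {c: p / z for c, p in joint.items()}

# Navigation degrades while network traffic looks normal: the posterior
# points to the physical domain (~0.67) rather than a cyber attack.
print(posterior(net_anomaly=False, nav_anomaly=True))
```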
This paper describes an experiment carried out to demonstrate the robustness and trustworthiness of an orchestrated two-layer network test-bed (PROnet). A Robot Operating System Industrial (ROS-I) distributed application makes use of end-to-end flow services offered by PROnet. The PROnet Orchestrator is used to provision reliable end-to-end Ethernet flows to support the data exchange required by the ROS-I application. For maximum reliability, the Orchestrator provisions network resource redundancy at both layers, i.e., Ethernet and optical. Experimental results show that the robotic application is not interrupted by a fiber outage.
Interconnected everyday objects, on either public or private networks, are gradually becoming a reality in modern life, often referred to as the Internet of Things (IoT) or Cyber-Physical Systems (CPS). One stand-out example is systems based on Unmanned Aerial Vehicles (UAVs). Fleets of such vehicles (drones) are prophesied to assume multiple roles, from mundane to highly sensitive applications, such as prompt pizza or shopping deliveries to the home, or deployment for battlefield and combat missions. Drones, which we refer to as UAVs in this paper, can operate either individually (solo missions) or as part of a fleet (group missions), with or without a constant connection to a base station. The base station acts as the command centre to manage the drones' activities; however, an independent, localised and effective fleet control is necessary, potentially based on swarm intelligence, for several reasons: 1) an increase in the number of drone fleets; 2) fleet sizes that might reach tens of UAVs; 3) the need for such fleets to make time-critical decisions in the wild; 4) potential communication congestion and latency; and 5) in some cases, operation in challenging terrain that hinders or mandates limited communication with a control centre, e.g. operations spanning long periods of time or military use of fleets in enemy territory. This self-aware, mission-focused and independent fleet of drones may utilise swarm intelligence for a) air-traffic or flight-control management, b) obstacle avoidance, c) self-preservation (while maintaining the mission criteria), d) autonomous collaboration with other fleets in the wild, and e) assuring the security, privacy and safety of physical (the drones themselves) and virtual (data, software) assets. In this paper, we investigate the challenges faced by fleets of drones and propose a potential course of action for overcoming them.