Biblio
The growing diffusion of robotics in our daily life demands a deeper understanding of the mechanisms of trust in human-robot interaction. The performance of a robot is one of the most important factors influencing the trust of a human user. However, it is still unclear whether the circumstances in which a robot fails affect the user's trust. We investigate how the perception of robot failures may influence people's willingness to cooperate with the robot by following its instructions in a time-critical task. We conducted an experiment in which participants interacted with a robot that had previously failed in a related or an unrelated task. We hypothesized that users' observed and self-reported trust ratings would be higher in the condition where the robot had previously failed in an unrelated task. A proof-of-concept study with nine participants tentatively confirms our hypothesis. At the same time, our results reveal some flaws in the experimental design and encourage a future large-scale study.
Robots that interact with children are becoming more common in places such as child care and hospital environments. While such robots may mistakenly provide nonsensical information, or have mechanical malfunctions, we know little of how these robot errors are perceived by children, and how they impact trust. This is particularly important when robots provide children with information or instructions, such as in education or health care. Drawing inspiration from established psychology literature investigating how children trust entities who teach or provide them with information (informants), we designed and conducted an experiment to examine how robot errors affect how young children (3-5 years old) trust robots. Our results suggest that children utilize their understanding of people to develop their perceptions of robots, and use this to determine how to interact with robots. Specifically, we found that children developed their trust model of a robot based on the robot's previous errors, similar to how they would for a person. We however failed to replicate other prior findings with robots. Our results provide insight into how children as young as 3 years old might perceive robot errors and develop trust.
Human-robot trust is crucial to successful human-robot interaction. We conducted a study with 798 participants distributed across 32 conditions using four dimensions of human-robot trust (reliable, capable, ethical, sincere) identified by the Multi-Dimensional-Measure of Trust (MDMT). We tested whether these dimensions can differentially capture gains and losses in human-robot trust across robot roles and contexts. Using a 4 scenario × 4 trust dimension × 2 change direction between-subjects design, we found the behavior change manipulation effective for each of the four subscales. However, the pattern of results best supported a two-dimensional conception of trust, with reliable-capable and ethical-sincere as the major constituents.
In recent years, more and more testing criteria for deep learning systems have been proposed to ensure system robustness and reliability. These criteria were defined from different perspectives on diversity. However, there has been no comprehensive investigation of which diversities are the most essential ones for a testing criterion for deep learning systems to consider. Therefore, in this paper, we conduct an empirical study to investigate the relation between test diversities and erroneous behaviors of deep learning models. We define five metrics to reflect diversities in neuron activities, and leverage metamorphic testing to detect erroneous behaviors. We investigate the correlation between the metrics and erroneous behaviors. We also go a step further and measure the quality of test suites under the guidance of the defined metrics. Our results provide comprehensive insights into the essential diversities a testing criterion needs in order to exhibit good fault-detection ability.
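As a minimal sketch of what a neuron-activity diversity metric of this kind can look like, the snippet below computes a simple coverage-style score over a layer's activations; the threshold-based activation criterion and the function name are illustrative assumptions, not the five metrics actually defined in the paper.

```python
import numpy as np

def neuron_activation_diversity(activations, threshold=0.5):
    """Fraction of neurons activated above a threshold by at least one test input.

    activations: array of shape (num_inputs, num_neurons) holding one layer's
    outputs for a batch of test inputs (an illustrative stand-in for the
    paper's own diversity metrics).
    """
    activated = (activations > threshold).any(axis=0)  # per-neuron: ever activated?
    return activated.mean()                            # diversity as covered fraction

# Toy usage: two test suites probed against a 4-neuron layer.
suite_a = np.array([[0.9, 0.1, 0.2, 0.0],
                    [0.8, 0.2, 0.1, 0.1]])
suite_b = np.array([[0.9, 0.7, 0.2, 0.6],
                    [0.1, 0.2, 0.8, 0.9]])
print(neuron_activation_diversity(suite_a))  # 0.25 -> low diversity
print(neuron_activation_diversity(suite_b))  # 1.0  -> higher diversity
```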
Realizing the importance of the concept of the “smart city” and its impact on quality of life, many infrastructures, such as power plants, began their digital transformation process by leveraging modern computing and advanced communication technologies. Unfortunately, by increasing the number of connections, power plants become more and more vulnerable and also an attractive target for cyber-physical attacks. The analysis of interdependencies among system components reveals the interdependent connections and facilitates the identification of those components in need of special protection. In this paper, we review the recent literature that utilizes graph-based and network-based models to study these interdependencies. A comprehensive overview is presented based on the main features of the systems, including communication direction, control parameters, research target, scalability, security, and safety. We also determine the computational complexity associated with the approaches presented in the reviewed papers and use this metric to assess their scalability.
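As a minimal illustration of the graph-based modelling covered by the review, the sketch below builds a small dependency graph of hypothetical plant components and ranks them by betweenness centrality as one possible proxy for which components need special protection; the component names and the choice of centrality measure are assumptions, not taken from any reviewed paper.

```python
import networkx as nx

# Hypothetical interdependency graph: an edge points from a component to a
# component that depends on it (names are illustrative only).
g = nx.DiGraph()
g.add_edges_from([
    ("control_network", "turbine_controller"),
    ("control_network", "cooling_pump"),
    ("scada_server", "control_network"),
    ("power_bus", "scada_server"),
    ("power_bus", "cooling_pump"),
])

# Betweenness centrality as one simple proxy for components whose compromise
# would disrupt many interdependent paths.
ranking = nx.betweenness_centrality(g)
for component, score in sorted(ranking.items(), key=lambda kv: -kv[1]):
    print(f"{component}: {score:.2f}")
```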
The proposed combination of statistical methods has proved effective for authorship attribution. The complex analysis method based on this combination makes it possible to minimize the number of phoneme groups by which the authorial differentiation of texts is performed.
In the Internet of Things (IoT), each object is addressable, trackable, and accessible on the Internet. To be useful, objects in the IoT cooperate and exchange information. IoT networks are open, anonymous, and dynamic in nature, so a malicious object may enter the network and disrupt it. Trust models have been proposed to identify malicious objects and to improve the reliability of the network. Recommendations are the basis of trust computation in these models, which makes them vulnerable to bad-mouthing and collusion attacks. In this paper, we propose a similarity model to mitigate bad-mouthing and collusion attacks, and we show that the proposed method efficiently removes the impact of malicious recommendations from trust computation.
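One simple way a similarity-based recommendation filter of this kind might work is sketched below: recommendations that deviate too far from the recommender set's consensus are discarded before trust is aggregated. The deviation threshold and median-based rule are assumptions for illustration, not the paper's actual model.

```python
def filter_recommendations(recommendations, max_deviation=0.25):
    """Keep only recommendations close to the median of the set, a simple
    stand-in for a similarity-based defence against bad-mouthing and
    collusion (the threshold is illustrative)."""
    ordered = sorted(recommendations)
    median = ordered[len(ordered) // 2]
    return [r for r in recommendations if abs(r - median) <= max_deviation]

def aggregate_trust(recommendations):
    """Average the recommendations that survive the similarity filter."""
    accepted = filter_recommendations(recommendations)
    return sum(accepted) / len(accepted) if accepted else 0.0

# Toy usage: three honest recommenders and two colluding bad-mouthers.
print(aggregate_trust([0.8, 0.75, 0.85, 0.1, 0.05]))  # stays close to 0.8
```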
In many hostile military environments, for instance war zones or unfriendly terrain, systems must operate in a specially provisioned mode and tolerate disruptions to the defined network architecture. Deploying Disruption-Tolerant Networks (DTNs) enhances connectivity between the remote devices issued to soldiers in the war zone, but this setting places reliable data transmission under scrutiny. The ciphertext-policy approach is based on attribute-based encryption, which operates on the attributes or roles of users and is a successful cryptographic strategy for enforcing access control while allowing reliable data transfer. In particular, because these systems are decentralized and data-constrained, implementing Ciphertext-Policy Attribute-Based Encryption (CP-ABE) has been an important issue; the strategy provides a new security and data-protection approach through key revocation, key escrow, and the combination of attributes managed by multiple key authorities. This paper concentrates on a reliable data retrieval system based on CP-ABE for Disruption-Tolerant Networks in which multiple key authorities manage their respective attributes safely and securely. We perform a comparative analysis of existing schemes against the recommended system components configured in the decentralized disruption-tolerant military system for reliable data retrieval.
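To make the access-control idea behind CP-ABE concrete, the toy sketch below evaluates an attribute policy attached to a ciphertext against a key holder's attribute set. It illustrates only the access structure, not the pairing-based cryptography, key revocation, or key escrow; all attribute names and the policy format are hypothetical.

```python
# Toy CP-ABE-style policy check: a ciphertext carries an AND/OR policy over
# attributes, and only key holders whose attributes satisfy it may decrypt.
def satisfies(policy, attributes):
    op, *clauses = policy
    checks = (satisfies(c, attributes) if isinstance(c, tuple) else c in attributes
              for c in clauses)
    return all(checks) if op == "AND" else any(checks)

policy = ("AND", "battalion_1", ("OR", "region_2_commander", "medical_officer"))
print(satisfies(policy, {"battalion_1", "medical_officer"}))  # True  -> may decrypt
print(satisfies(policy, {"battalion_2", "medical_officer"}))  # False -> may not
```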
The loss-of-field (LOF) relay, with ANSI code 40, provides one of the most important protection functions for synchronous generators in power plants. Although many LOF protection schemes have been presented in the literature during the last decades, only a few of them, such as impedance- and admittance-based schemes, are accepted by industry. This paper explores and compares the performance of several industrial LOF protection schemes through simulation studies, from the viewpoints of speed, reliability, and security. The simulation studies are carried out on a real-time digital simulator, where a realistic power generation unit is developed by employing the phase-domain model of the synchronous generator. Using such a realistic system, various types of LOF events can be simulated in accordance with IEEE Standard C37.102-2006, so that the performance of any method can be evaluated through careful LOF studies.
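A minimal sketch of the kind of check an impedance-based LOF scheme relies on: the apparent impedance seen at the generator terminals is tested against a negative-offset mho circle. The per-unit reactances and the trajectory below are illustrative placeholders, not settings taken from the paper or from IEEE C37.102-2006.

```python
def inside_offset_mho(r, x, offset, diameter):
    """True if the apparent impedance (r + jx) lies inside a mho circle
    offset downward along the reactance axis."""
    center_x = -(offset + diameter / 2.0)  # circle centre on the -X axis
    radius = diameter / 2.0
    return r ** 2 + (x - center_x) ** 2 <= radius ** 2

# Illustrative per-unit machine reactances (placeholders only).
XD_PRIME = 0.3   # transient reactance
XD = 1.8         # synchronous reactance

# An apparent-impedance trajectory drifting into the fourth quadrant,
# as typically seen during a loss-of-field event.
trajectory = [(0.8, 0.4), (0.5, -0.2), (0.2, -0.9)]
for r, x in trajectory:
    tripped = inside_offset_mho(r, x, offset=XD_PRIME / 2.0, diameter=XD)
    print(f"Z = {r:+.2f} {x:+.2f}j pu -> trip: {tripped}")
```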
In this paper, we present a chaos-based information-rotated polar coding scheme for enhancing the reliability and security of visible light communication (VLC) systems. In our scheme, we rotate the original information, wherein the rotation principle is determined by two chaotic sequences. The rotated information is then encoded by a secure polar coding scheme. After the channel polarization achieved by the polar coding, we can identify the bit-channels that provide good transmission conditions for legitimate users and the bit-channels with bad conditions for eavesdroppers. Simulations are performed over the visible light wiretap channel. The results demonstrate that, compared with existing schemes, the proposed scheme achieves better reliability and security even when the eavesdroppers have better channel conditions.
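A minimal sketch of the chaotic-rotation step only (the polar encoder itself is omitted): a logistic map generates two chaotic sequences whose combined ordering permutes the information bits. The map parameters, seed values, and the sort-based permutation are assumptions for illustration, not the rotation principle defined in the paper.

```python
def logistic_sequence(x0, length, r=3.99):
    """Generate a chaotic sequence with the logistic map x <- r*x*(1-x)."""
    seq, x = [], x0
    for _ in range(length):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

def chaotic_rotate(bits, key1=0.37, key2=0.62):
    """Permute ('rotate') the information bits using two chaotic sequences,
    an illustrative stand-in for the paper's rotation principle."""
    s1 = logistic_sequence(key1, len(bits))
    s2 = logistic_sequence(key2, len(bits))
    order = sorted(range(len(bits)), key=lambda i: s1[i] + s2[i])
    return [bits[i] for i in order]

info_bits = [1, 0, 1, 1, 0, 0, 1, 0]
rotated = chaotic_rotate(info_bits)
print(rotated)  # permuted bits, to be fed to the secure polar encoder
```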
Vehicular Ad-hoc Networks (VANETs) play an essential role in ensuring safe, reliable, and faster transportation with the help of an Intelligent Transportation System. The trustworthiness of vehicles in VANETs is extremely important to ensure the authenticity of messages and traffic information transmitted in extremely dynamic topographical conditions where vehicles move at high speed. False or misleading information may cause substantial traffic congestion, road accidents, and may even cost lives. Many approaches exist in the literature to measure the trustworthiness of the GPS data and messages of an Autonomous Vehicle (AV). To the best of our knowledge, they have not considered the trustworthiness of other On-Board Unit (OBU) components of an AV alongside GPS data and transmitted messages, even though these components have substantial relevance to overall vehicle trust measurement. In this paper, we introduce a novel model that measures the overall trustworthiness of an AV by additionally considering four different OBU components. The performance of the proposed method is evaluated with a traffic simulation model developed in Simulation of Urban Mobility (SUMO) using realistic traffic data and considering different levels of uncertainty.
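One simple shape such an overall trust measure could take is a weighted combination of per-component trust scores, as sketched below; the component names, weights, and linear aggregation are assumptions for illustration and are not the model proposed in the paper.

```python
# Hypothetical per-component trust scores in [0, 1] for one vehicle;
# component names and weights are illustrative only.
component_trust = {
    "gps": 0.9,
    "messages": 0.8,
    "camera": 0.7,
    "lidar": 0.95,
    "radar": 0.85,
    "speed_sensor": 0.9,
}
weights = {
    "gps": 0.25, "messages": 0.25, "camera": 0.1,
    "lidar": 0.15, "radar": 0.15, "speed_sensor": 0.1,
}

# Overall vehicle trust as a weighted sum of component trust scores.
overall_trust = sum(weights[c] * component_trust[c] for c in component_trust)
print(f"overall vehicle trust: {overall_trust:.3f}")
```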
Cyber-physical systems (CPS) are state-of-the-art communication environments that offer various applications with distinct requirements. However, security in CPS is a non-negotiable concept, since without a proper security mechanism the applications of CPS may put human lives, the privacy of individuals, and system operations at risk. In this paper, we focus on PHY-layer security approaches in CPS to prevent passive eavesdropping attacks, and we propose an integration of physical-layer operations to enhance security. Inspired by the McEliece cryptosystem, error injection is first applied to the information bits, which are encoded with forward error correction (FEC) schemes. Golay and Hamming codes are selected as the FEC schemes to satisfy power and computational efficiency. The obtained codewords are then transmitted across reliable intermediate relays to the legitimate receiver. As a performance metric, the decoding frame error rate of the eavesdropper is derived analytically for the case in which significant noise is present on only some of the links between the relays and Eve. The simulation results validate the analytical calculations, and they show that the number of low-quality channels and the selected FEC scheme affect the performance of the proposed model.
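A minimal sketch of the error-injection idea using a Hamming(7,4) code: a deliberate single-bit error is added to each codeword, which a receiver with a clean channel can still correct, while additional channel errors on the eavesdropper's side push the word beyond the code's correction capability. The Hamming(7,4) matrices are standard; where exactly the errors are injected in the paper's scheme, and the rest of the flow, are assumptions.

```python
import numpy as np

# Systematic Hamming(7,4): G = [I | P], H = [P^T | I] (standard matrices).
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])
H = np.hstack([P.T, np.eye(3, dtype=int)])

def encode(info_bits):
    return (info_bits @ G) % 2

def inject_error(codeword, position):
    noisy = codeword.copy()
    noisy[position] ^= 1          # deliberate single-bit error (McEliece-style)
    return noisy

def decode(received):
    syndrome = (H @ received) % 2
    if syndrome.any():            # locate the flipped bit by matching H's columns
        for col in range(H.shape[1]):
            if np.array_equal(H[:, col], syndrome):
                received = received.copy()
                received[col] ^= 1
                break
    return received[:4]           # systematic code: first 4 bits are the message

msg = np.array([1, 0, 1, 1])
noisy = inject_error(encode(msg), position=2)
print(decode(noisy))              # legitimate receiver still recovers [1 0 1 1]
```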