Bibliography
The emergence of Cyber-Physical Systems (CPSs) represents a potential paradigm shift in the use of Information and Communication Technologies (ICT). From being predominantly a facilitator of information and communication services, the role of ICT has expanded to the management of objects and resources in the physical world. It is therefore imperative to devise mechanisms that ensure the trustworthiness of data and secure vulnerable devices against security threats. This work presents an analytical framework based on non-cooperative game theory to evaluate the trustworthiness of the individual sensor nodes that constitute a CPS. The proposed game-theoretic model captures the factors impacting the trustworthiness of CPS sensor nodes. The model is then used to estimate the Nash equilibrium of the game and to derive a trust threshold criterion. The trust threshold represents the minimum trust score that individual sensor nodes must maintain during CPS operation. Sensor nodes with trust scores below the threshold are potentially malicious and may be removed or isolated to ensure the secure operation of the CPS.
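As a concrete illustration of deriving a threshold from an equilibrium, the sketch below computes the fully mixed Nash equilibrium of a hypothetical 2x2 defender-versus-node game via the standard indifference conditions; the payoff matrices are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Illustrative 2x2 game (assumed payoffs, not the paper's model):
# defender rows = {trust, distrust}; sensor node columns = {honest, malicious}.
A = np.array([[ 1.0, -2.0],    # defender payoffs
              [-0.5,  0.5]])
B = np.array([[ 1.0,  2.0],    # node payoffs
              [ 0.5, -1.0]])

def mixed_nash_2x2(A, B):
    """Fully mixed Nash equilibrium of a 2x2 bimatrix game via indifference."""
    # p = P(defender trusts): chosen so the node is indifferent between columns.
    p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[0, 1] - B[1, 0] + B[1, 1])
    # q = P(node honest): chosen so the defender is indifferent between rows.
    q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])
    return p, q

p, q = mixed_nash_2x2(A, B)
# q can be read as a candidate trust threshold: nodes whose observed trust
# score falls below it are flagged as potentially malicious.
print(f"defender trusts with p={p:.2f}; node honest with q={q:.2f}")
```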
Trust prediction in online social networks is crucial for information dissemination, product promotion, and decision making. Existing work on trust prediction mainly utilizes the network structure or a low-rank approximation of the trust network. These approaches can suffer from data sparsity and limited prediction accuracy. Inspired by homophily theory, which observes that trust relations in social and economic networks tend to develop among similar people, we propose a novel deep user model for trust prediction based on user similarity measurement. It is a comprehensive model, insensitive to data sparsity, that combines a user's review behavior with the characteristics of the items that user is interested in. With this user model, we first generate a user's latent features, mined from the user's review behavior and the properties of the items the user cares about. We then develop a pair-wise deep neural network to further learn and represent these user features. Finally, we measure the trust relation between a pair of users by computing the cosine similarity of their feature vectors. Extensive experiments on two real-world datasets demonstrate the superior performance of the proposed approach over representative baselines.
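The final scoring step is simple to illustrate: a minimal sketch, assuming the pair-wise network has already produced fixed-length user embeddings (the vectors below are made up), that scores a trust relation by cosine similarity:

```python
import numpy as np

def cosine_trust(u: np.ndarray, v: np.ndarray) -> float:
    """Trust score for a user pair = cosine similarity of their embeddings."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

user_a = np.array([0.2, 0.8, 0.1])   # illustrative learned latent features
user_b = np.array([0.3, 0.7, 0.0])
print(f"trust(a, b) = {cosine_trust(user_a, user_b):.3f}")
```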
Trusted collaboration that satisfies the requirements of (a) adequate transparency and (b) preservation of the privacy of business-sensitive information is a key factor in ensuring the success and adoption of online business-to-business (B2B) collaboration platforms. Our work proposes novel ways of stringing together game-theoretic modeling, blockchain technology, and cryptographic techniques to build such a platform for B2B collaboration involving enterprise buyers and sellers who may be strategic. The B2B platform builds upon three ideas. The first is to use a permissioned blockchain with smart contracts as the technical infrastructure for the platform. Second, these smart contracts implement deep business logic, derived from a rigorous analysis of a repeated game model of the strategic interactions between buyers and sellers, to devise strategies that induce honest behavior. Third, we present a formal framework that captures the essential requirements for secure and private B2B collaboration and, in this direction, develop cryptographic regulation protocols that, in conjunction with the blockchain, help implement such a framework. We believe our work is an important first step toward a platform that enables B2B collaboration among strategic and competitive agents while maximizing social welfare and addressing the privacy concerns of the agents.
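As one illustration of the kind of repeated-game reasoning such business logic can encode, the sketch below checks the textbook condition under which honest play is sustained against a one-shot deviation under a grim-trigger punishment; the payoff values and the grim-trigger strategy are generic assumptions, not the paper's actual model.

```python
def honesty_sustainable(C: float, D: float, P: float, delta: float) -> bool:
    """True if discounted honest play beats deviating once and being punished.

    C = per-round payoff from honest behavior, D = one-shot deviation payoff,
    P = per-round payoff under punishment (P < C < D), delta = discount factor.
    """
    # C/(1-delta) >= D + delta*P/(1-delta)  <=>  delta >= (D - C)/(D - P)
    return delta >= (D - C) / (D - P)

# Illustrative numbers: honesty holds only for sufficiently patient agents.
print(honesty_sustainable(C=3.0, D=5.0, P=1.0, delta=0.6))  # True (0.6 >= 0.5)
print(honesty_sustainable(C=3.0, D=5.0, P=1.0, delta=0.4))  # False
```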
Trust is known to be a key component of human social relationships, and it defines human behavior toward others to a large extent. Generative models have been used extensively in social network studies to simulate different characteristics and phenomena of social graphs. In this work, an attempt is made to understand how trust in social graphs can be combined with generative modeling techniques to generate trust-based social graphs. These generated graphs are then compared with the original social graphs to evaluate how trust helps in generative modeling. Two well-known social network datasets, soc-Bitcoin and the wiki administrator network, are used in this work. Social graphs are generated from these datasets and compared with the original graphs, alongside other standard generative modeling techniques, to assess the contribution of trust. Generative modeling techniques have been available for some time, but this investigation on real social graph datasets validates that trust can be an important factor in generative modeling.
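As a sketch of how trust can enter a generative model, the snippet below biases preferential attachment by per-node trust scores; the attachment rule and the trust values are illustrative assumptions, not the paper's exact procedure (requires networkx):

```python
import random
import networkx as nx

def trust_preferential_graph(n: int, trust: dict, seed: int = 7) -> nx.Graph:
    """Grow a graph where attachment probability ~ degree * node trust."""
    rng = random.Random(seed)
    g = nx.Graph()
    g.add_edge(0, 1)                      # seed edge
    for new in range(2, n):
        nodes = list(g.nodes)
        weights = [g.degree(v) * trust[v] for v in nodes]
        g.add_edge(new, rng.choices(nodes, weights=weights, k=1)[0])
    return g

rng = random.Random(0)
trust = {i: 0.1 + 0.9 * rng.random() for i in range(50)}   # illustrative scores
g = trust_preferential_graph(50, trust)
print(g.number_of_nodes(), g.number_of_edges())            # 50 49 (a tree)
```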
Nowadays, Microblog has become an important online social networking platform, and a large number of users share information through it. Many malicious users release false news driven by various interests, which seriously affects the availability of the Microblog platform. The evaluation of Microblog user credibility has therefore become an important research issue. This paper proposes a Microblog user credibility evaluation algorithm based on trust propagation. To address the high cost and low precision caused by malicious users attacking the algorithm through fabricated social relationships, and by manual selection of seed sets, this paper proposes two optimization strategies: a pruning algorithm based on social activity and similarity, and a seed node selection algorithm based on clustering. The pruning algorithm trims off the attack edges established between malicious users and normal users. The seed node selection algorithm efficiently selects a highly available seed node set. Finally, the user social relationship graph is used to perform bidirectional trust-score propagation, so that less trustworthy users receive lower trust scores and malicious users can thus be identified. Experiments verify the effectiveness of the proposed trust-propagation-based algorithm in evaluating Microblog user credibility.
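The propagation step can be approximated by a TrustRank-style random walk that repeatedly pushes trust mass out from the seed set; the adjacency matrix, seed choice, and damping factor below are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def propagate_trust(adj: np.ndarray, seeds, alpha: float = 0.85,
                    iters: int = 50) -> np.ndarray:
    """Spread trust from seed users over the social graph (power iteration)."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    P = np.divide(adj, out, out=np.zeros_like(adj), where=out > 0)  # row-stochastic
    d = np.zeros(n)
    d[list(seeds)] = 1.0 / len(seeds)          # trust mass starts at the seeds
    t = d.copy()
    for _ in range(iters):
        t = alpha * P.T @ t + (1 - alpha) * d  # propagate, then teleport to seeds
    return t

adj = np.array([[0, 1, 1, 0],     # user 3 hangs off user 2 only
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
print(np.round(propagate_trust(adj, seeds=[0]), 3))
```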
We propose a distributed machine-learning architecture to predict the trustworthiness of sensor services in Mobile Edge Computing (MEC) based Internet of Things (IoT) services, which aligns well with the goals of MEC and the requirements of modern IoT systems. The proposed architecture models the training of a distributed trust prediction model over a topology of MEC environments as a Network Lasso problem, which allows simultaneous clustering and optimization on large-scale networked graphs. We then solve it using the Alternating Direction Method of Multipliers (ADMM) in a way that makes it suitable for MEC-based IoT systems. We present analytical and simulation results to show the validity and efficiency of the proposed solution.
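For reference, the Network Lasso objective the training is cast as is: minimize sum_i f_i(x_i) + lambda * sum_{(j,k) in E} ||x_j - x_k||_2, where each node i holds a local model x_i and edges couple neighboring models. The sketch below evaluates this objective with a squared-loss local term as an illustrative choice; the per-node data and topology are made up:

```python
import numpy as np

def network_lasso_objective(X, A, b, edges, lam):
    """sum_i ||A_i x_i - b_i||^2 + lam * sum_{(j,k) in E} ||x_j - x_k||_2."""
    local = sum(float(np.sum((A[i] @ X[i] - b[i]) ** 2)) for i in range(len(X)))
    coupling = sum(float(np.linalg.norm(X[j] - X[k])) for j, k in edges)
    return local + lam * coupling

rng = np.random.default_rng(0)
n_nodes, d, m = 4, 3, 5
A = [rng.normal(size=(m, d)) for _ in range(n_nodes)]   # per-MEC-node data
b = [rng.normal(size=m) for _ in range(n_nodes)]
X = [rng.normal(size=d) for _ in range(n_nodes)]        # local trust models
edges = [(0, 1), (1, 2), (2, 3)]                        # MEC topology
print(f"objective = {network_lasso_objective(X, A, b, edges, lam=0.5):.3f}")
```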
Verifying complex Cyber-Physical Systems (CPS) is increasingly important given the push to deploy safety-critical autonomous features. Unfortunately, traditional verification methods do not scale to the complexity of these systems and do not provide systematic ways to protect verified properties when not all components can be verified. To address these challenges, this paper proposes a real-time mixed-trust computing framework that combines verification and protection. The framework introduces a new task model in which an application task can have both an untrusted and a trusted part. The untrusted part allows complex computations supported by a full OS with a real-time scheduler running in a VM hosted by a trusted hypervisor. The trusted part is executed by another scheduler within the hypervisor and is thus protected from the untrusted part. If the untrusted part fails to finish by a specific time, the trusted part is activated to preserve safety (e.g., prevent a crash), including its timing guarantees. This framework is the first to allow the use of untrusted components for critical CPS functions while preserving logical and timing guarantees, even in the presence of malicious attackers. We present the framework's design and implementation along with the schedulability analysis and the coordination protocol between the trusted and untrusted parts. We also present our Raspberry Pi 3 implementation, along with experiments showing the behavior of the system under failures of untrusted components, and a drone application to demonstrate its practicality.
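Outside a hypervisor, the task-model idea can be mimicked in ordinary user space: run the untrusted computation under a time budget and switch to a simple trusted fallback on overrun or failure. The sketch below is an illustrative analogue only; the function names and budget are assumptions, and it provides none of the paper's isolation or timing guarantees.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def mixed_trust_step(untrusted, trusted_fallback, budget_s: float):
    """Run the untrusted part with a deadline; fall back to the trusted part."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(untrusted)
        try:
            return future.result(timeout=budget_s)   # untrusted met its deadline
        except Exception:
            return trusted_fallback()                # trusted part preserves safety

def complex_planner():        # untrusted: rich logic, may overrun or crash
    time.sleep(0.2)
    return "optimized maneuver"

def safe_controller():        # trusted: simple, verified behavior
    return "hover in place"

print(mixed_trust_step(complex_planner, safe_controller, budget_s=0.05))
```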
Malware is pervasive and poses serious threats to the normal operation of business processes in the cloud. Cloud computing environments typically have hundreds of hosts connected to each other, often with high-risk trust assumptions and/or protection mechanisms that are not difficult to break. Malware often exploits such weaknesses, as its immediate goal is to spread itself to as many hosts as possible. Detecting this propagation is difficult because the malware may reside in multiple components across the software or hardware stack. In this scenario, it is usually best to contain the malware to the smallest possible number of hosts, and it is also critical for system administrators to resolve the issue in a timely manner. Furthermore, resolution often requires that several participants across different organizational teams scramble together to address the intrusion. In this vision paper, we define this problem in detail. We then present our vision of decentralized malware containment and the challenges and issues associated with it. The containment approach involves detection and response using graph analytics coupled with a blockchain framework. We propose the use of dominance frontiers to profile the nodes that must be involved in the containment process. Smart contracts are used to obtain consensus among the involved parties. The paper presents a basic implementation of this proposal, and we further discuss some open problems related to our vision.
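To illustrate the graph-analytics piece, the snippet below computes dominance frontiers over a small, made-up host-connectivity digraph using networkx; the topology and node names are illustrative assumptions:

```python
import networkx as nx

# Hypothetical host-connectivity graph: malware enters at "entry".
G = nx.DiGraph([("entry", "a"), ("entry", "b"),
                ("a", "c"), ("b", "c"), ("c", "d")])

# The dominance frontier of a host marks where its control over propagation
# paths ends: hosts that containment at that host cannot cover alone.
frontiers = nx.dominance_frontiers(G, "entry")
print({node: sorted(df) for node, df in frontiers.items()})
# {'entry': [], 'a': ['c'], 'b': ['c'], 'c': [], 'd': []}
```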
Blockchains, with their inherent properties of transaction transparency, distributed consensus, immutability, and cryptographic verifiability, are increasingly seen as a means to underpin innovative products and services in a range of sectors, from finance through to energy and healthcare. Discussions too often assert that the trustless nature of blockchain technologies enables and actively promotes their suitability, there being no need to trust third parties or centralised control. Yet humans need to be able to trust systems, and to trust others with whom the system enables transactions. In this paper, we highlight that understanding this need for trust is critical for the development of blockchain-based systems. Through an online study with 125 users of the most well-known blockchain-based system, the cryptocurrency Bitcoin, we uncover that human and institutional aspects of trust are pervasive. Our analysis highlights that, when designing future blockchain-based technologies, we ought to consider not only computational trust but also the wider eco-system: how trust plays a part in users engaging and disengaging with such eco-systems, and where design choices impact trust. From this, we distill a set of guidelines for software engineers developing blockchain-based systems for societal applications.
Most traditional recommendation algorithms consider only the binary relationship between users and items, so they can essentially be converted into score prediction problems. Most of these algorithms, however, ignore users' interests, latent contributing factors, or other social factors of the recommended products. In this paper, based on an existing trustworthiness model and similarity measure, we put forward the concept of trust similarity and design a joint interest-content recommendation framework to suggest to users which videos to watch on an online video site. In this framework, we first analyze users' viewing history and tags to establish each user's interest feature vector. Then, based on the updated vector, users are clustered using a sparse subspace clustering algorithm, which improves the efficiency of the algorithm. We also improve the similarity calculation to help users find better neighbors. Finally, we conduct experiments using real traces from Tencent Weibo and Youku to verify our method and evaluate its performance. The results demonstrate the effectiveness of our approach and show that it can substantially improve recommendation accuracy.
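A minimal sketch of the "trust similarity" idea, blending interest-vector cosine similarity with a pairwise trust score; the blend weight alpha and the vectors are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def trust_similarity(interest_u, interest_v, trust_uv, alpha=0.5):
    """Blend interest similarity with trust to rank candidate neighbors."""
    cos = interest_u @ interest_v / (
        np.linalg.norm(interest_u) * np.linalg.norm(interest_v))
    return alpha * cos + (1 - alpha) * trust_uv

u = np.array([3.0, 0.0, 1.0, 2.0])   # tag counts from a viewing history
v = np.array([2.0, 1.0, 0.0, 2.0])
print(f"trust-similarity = {trust_similarity(u, v, trust_uv=0.8):.3f}")
```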
Over the last few decades, a breakthrough has taken place in the field of autonomous robotics. Autonomous robots have been introduced to perform dangerous, dirty, difficult, and dull tasks in service of the community. They have also been used for health-care related tasks, such as enhancing surgeons' surgical skills and enabling surgeries in remote areas, which may help perform operations in remote areas efficiently and in a timely manner, with or without human intervention. One of the main advantages is that robots are not affected by human-related problems such as fatigue or momentary lapses of attention, so they can perform repeated and tedious operations. In this paper, we propose a framework to establish trust in autonomous medical robots based on mutual understanding and transparency in decision making.
The presence of robots is becoming more apparent as technology progresses and the market focus transitions from smartphones to robotic personal assistants such as those provided by Amazon and Google. The integration of robots into our societies is an inevitable trend, in which robots in many forms and with many functionalities will provide services to humans. This calls for an understanding of how humans are affected both by the presence of robots and by their reliance on robots to perform services for them. In this paper we explore the effects that robots have on humans when a service is performed on request. We expose three groups of human participants to three levels of service completion performed by robots. We record and analyse human perceptions such as propensity to trust, competency, responsiveness, sociability, and teamwork ability. Our results demonstrate that humans tend to trust robots and are more willing to interact with them when the robots autonomously recover from failure by requesting help from other robots to fulfil their service. This supports the view that autonomy and teamwork capabilities must be built into robots in an effort to strengthen trust in robots performing a service.
When robots and human users collaborate, trust is essential for user acceptance and engagement. In this paper, we investigated two factors thought to influence user trust towards a robot: preference elicitation (a combination of user involvement and explanation) and embodiment. We set our experiment in the application domain of a restaurant recommender system, assessing trust via user decision making and perceived source credibility. Previous research in this area used simulated environments and recommender systems that present the user with the best choice from a pool of options. This experiment builds on past work in two ways: first, we strengthened the ecological validity of our experimental paradigm by incorporating perceived risk during decision making; and second, we used a system that recommends a non-optimal choice to the user. While no effect of embodiment was found on trust, the inclusion of preference elicitation features significantly increased user trust towards the robot recommender system. These findings have implications for marketing and health promotion in relation to Human-Robot Interaction, and call for further investigation into the development and maintenance of trust between robot and user.
In this paper, we present results from a human-subject study designed to explore two facets of human mental models of robots - inferred capability and intention - and their relationship to overall trust and eventual decisions. In particular, we examine delegation situations characterized by uncertainty, and explore how inferred capability and intention are applied across different tasks. We develop an online survey where human participants decide whether to delegate control to a simulated UAV agent. Our study shows that human estimations of robot capability and intent correlate strongly with overall self-reported trust. However, overall trust is not independently sufficient to determine whether a human will decide to trust (delegate) a given task to a robot. Instead, our study reveals that estimations of robot intention, capability, and overall trust are integrated when deciding to delegate. From a broader perspective, these results suggest that calibrating overall trust alone is insufficient; to make correct decisions, humans need (and use) multi-faceted mental models when collaborating with robots across multiple contexts.
Trust is an important topic in medical human-robot interaction, since patients may be more fragile than other groups of people. This paper investigates users' trust when interacting with a rehabilitation robot. In the study, we examine participants' heart rates and perception of safety in a scenario in which their arm is led by the rehabilitation robot in two types of exercises at three different velocities. Participants' heart rates are measured during each exercise, and participants are asked how safe they feel after each exercise. The results show that velocity and type of exercise have no significant influence on participants' heart rates, but they do have a significant influence on how safe participants feel: increasing velocity and longer exercises negatively influence participants' perception of safety.
The growing diffusion of robotics into our daily life demands a deeper understanding of the mechanisms of trust in human-robot interaction. The performance of a robot is one of the most important factors influencing a human user's trust. However, it is still unclear whether the circumstances in which a robot fails affect the user's trust. We investigate how the perception of robot failures may influence people's willingness to cooperate with the robot by following its instructions in a time-critical task. We conducted an experiment in which participants interacted with a robot that had previously failed in either a related or an unrelated task. We hypothesized that users' observed and self-reported trust ratings would be higher in the condition where the robot had previously failed in an unrelated task. A proof-of-concept study with nine participants tentatively confirms our hypothesis. At the same time, our results reveal some flaws in the experimental design and motivate a future large-scale study.
When a robot breaks a person's trust by making a mistake or failing, continued interaction will depend heavily on how the robot repairs the trust that was broken. Prior work in psychology has demonstrated that both the trust violation framing and the trust repair strategy influence how effectively trust can be restored. We investigate trust repair between a human and a robot in the context of a competitive game, where a robot tries to restore a human's trust after a broken promise, using either a competence or an integrity trust violation framing and either an apology or a denial trust repair strategy. Results from a 2×2 between-subjects study (n=82) show that participants interacting with a robot employing the integrity trust violation framing and the denial trust repair strategy are significantly more likely to exhibit behavioral retaliation toward the robot. In the Dyadic Trust Scale survey, an interaction between trust violation framing and trust repair strategy was observed. Our results demonstrate the importance of considering both trust violation framing and trust repair strategy when designing robots to repair trust. We also discuss the influence of human-to-robot promises and the ethical considerations involved in framing and repairing trust between a human and a robot.
Robots that interact with children are becoming more common in settings such as child care and hospital environments. While such robots may mistakenly provide nonsensical information or have mechanical malfunctions, we know little about how these robot errors are perceived by children and how they impact trust. This is particularly important when robots provide children with information or instructions, such as in education or health care. Drawing inspiration from established psychology literature investigating how children trust entities who teach or provide them with information (informants), we designed and conducted an experiment to examine how robot errors affect how young children (3-5 years old) trust robots. Our results suggest that children utilize their understanding of people to develop their perceptions of robots, and use this to determine how to interact with robots. Specifically, we found that children developed their trust model of a robot based on the robot's previous errors, similar to how they would for a person. However, we failed to replicate other prior findings with robots. Our results provide insight into how children as young as 3 years old might perceive robot errors and develop trust.
Human-robot trust is crucial to successful human-robot interaction. We conducted a study with 798 participants distributed across 32 conditions using the four dimensions of human-robot trust (reliable, capable, ethical, sincere) identified by the Multi-Dimensional Measure of Trust (MDMT). We tested whether these dimensions can differentially capture gains and losses in human-robot trust across robot roles and contexts. Using a 4 scenario × 4 trust dimension × 2 change direction between-subjects design, we found the behavior change manipulation effective for each of the four subscales. However, the pattern of results best supported a two-dimensional conception of trust, with reliable-capable and ethical-sincere as the major constituents.