Biblio
Given the complexities involved in the sensing, navigation, and positioning environment on board automated vehicles, we conduct an exploratory survey and identify factors capable of influencing users' trust in such systems. After analysing the survey data, the Situational Awareness of the Vehicle (SAV) emerges as an important factor capable of influencing users' trust. We follow up by conducting semi-structured interviews with 12 experts in the Connected and Automated Vehicle (CAV) field, focusing on the importance of the SAV, the factors that matter most when discussing it, and the need to keep users informed about its status. We conclude that in the context of CAVs, the importance of the SAV can now be expanded beyond its technical necessity of making vehicles function to a human-factors concern: calibrating users' trust.
Logic locking has been conceived as a promising proactive defense strategy against intellectual property (IP) piracy, counterfeiting, hardware Trojans, reverse engineering, and overbuilding attacks. Yet, various attacks that use a working chip as an oracle have been launched on logic locking to successfully retrieve its secret key, undermining the defense of all existing locking techniques. In this paper, we propose stripped-functionality logic locking (SFLL), which strips some of the functionality of the design and hides it in the form of a secret key(s), thereby rendering on-chip implementation functionally different from the original one. When loaded onto an on-chip memory, the secret keys restore the original functionality of the design. Through security-aware synthesis that creates a controllable mismatch between the reverse-engineered netlist and original design, SFLL provides a quantifiable and provable resilience trade-off between all known and anticipated attacks. We demonstrate the application of SFLL to large designs (>100K gates) using a computer-aided design (CAD) framework that ensures attaining the desired security level at minimal implementation cost, 8%, 5%, and 0.5% for area, power, and delay, respectively. In addition to theoretical proofs and simulation confirmation of SFLL's security, we also report results from the silicon implementation of SFLL on an ARM Cortex-M0 microprocessor in 65nm technology.
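As an illustration of the flip-and-restore idea described above, the toy Python sketch below strips functionality by inverting the output for inputs at a chosen Hamming distance from the secret key, and cancels the inversion in a restore unit driven by the loaded key. This follows the general shape of the Hamming-distance-based SFLL variant, but all widths, functions, and parameters are illustrative assumptions rather than the paper's exact construction.

```python
# Illustrative sketch of functionality stripping and key-based restoration.
# Assumptions: 4-bit inputs, one protected pattern family selected by Hamming
# distance H from the secret key; all names and values are ours.

def hamming(a, b, width=4):
    return bin((a ^ b) & ((1 << width) - 1)).count("1")

def original(x):                      # the design's intended function
    return (x * 3 + 1) & 0xF

SECRET_KEY, H = 0b1010, 2

def stripped(x):
    # On-chip logic: output is flipped for inputs at distance H from the key,
    # so the netlist alone no longer implements `original`.
    flip = 1 if hamming(x, SECRET_KEY) == H else 0
    return original(x) ^ flip

def restore(x, key):
    # Restore unit driven by the key loaded into on-chip memory:
    # re-flips exactly the same inputs, cancelling the stripping.
    return stripped(x) ^ (1 if hamming(x, key) == H else 0)

assert all(restore(x, SECRET_KEY) == original(x) for x in range(16))
assert any(restore(x, 0b0000) != original(x) for x in range(16))   # wrong key
```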
Trust is an important facilitator for successful business relationships and an important technology adoption determinant. However, thus far trust has received little attention in the context of cloud computing, resulting in a lack of understanding of the dimensions of trust in cloud services and trust-building antecedents. Although the literature provides various conceptual models of trust for contexts related to cloud computing that may serve as a reference, in particular trust in IT outsourcing providers and trust in IT artifacts, idiosyncrasies of trust in cloud computing require a novel conceptual model of trust. First, a cloud service has a dual nature of being an IT artifact and a service provided by an organization. Second, cloud services are offered in impersonal cloud marketplaces and build upon a nested network of cloud services within the cloud ecosystem. In this article, we first analyze the concept of trust in cloud contexts. Next, we develop a conceptual model that describes trust in cloud services. The conceptual model incorporates the duality of trust in a cloud provider organization and trust in an IT artifact, as well as trust types for the impersonal environment and the cloud computing ecosystem. Using the conceptual model as a lens we then review 43 empirical studies on trust in IT outsourcing and trust in IT artifacts that were identified by a structured literature search. The resulting conceptual model provides a conceptual typology of constructs for trust in cloud services, defines trust-building antecedents, and develops 19 propositions describing the relationships between trust constructs and between trust constructs and trust-building antecedents. The conceptual model contributes to research by creating grounds for future theory-building on trust in cloud contexts, integrating two previously disjoint strands in the trust literature, and identifying knowledge gaps. Based on the conceptual model, we furthermore provide practical advice for managers from service providers, platform providers, customers, and institutional authorities.
Insider threats can cause immense damage to organizations of different types, including government, corporate, and non-profit organizations. Being an insider, however, does not necessarily equate to being a threat. Effectively identifying valid threats, and assessing the type of threat an insider presents, remain difficult challenges. In this work, we propose a novel breakdown of eight insider threat types, identified by using three insider traits: predictability, susceptibility, and awareness. In addition to presenting this framework for insider threat types, we implement a computational model to demonstrate the viability of our framework with synthetic scenarios devised after reviewing real-world insider threat case studies. The results yield useful insights into how further investigation might proceed to reveal how best to gauge predictability, susceptibility, and awareness, and precisely how they relate to the eight insider types.
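If, as seems implied, each of the three traits is dichotomised, the eight insider types correspond to the 2^3 combinations of low/high predictability, susceptibility, and awareness. The small sketch below enumerates them; the low/high levels and generated labels are our illustrative assumptions, not the paper's taxonomy.

```python
# Enumerate the eight insider types implied by three binary traits.
# Assumption: each trait is dichotomised as low/high; labels are illustrative.
from itertools import product

TRAITS = ("predictability", "susceptibility", "awareness")

insider_types = [
    {trait: level for trait, level in zip(TRAITS, levels)}
    for levels in product(("low", "high"), repeat=len(TRAITS))
]

assert len(insider_types) == 8
for i, profile in enumerate(insider_types, start=1):
    print(f"Type {i}: " + ", ".join(f"{t}={v}" for t, v in profile.items()))
```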
Cyber-physical system integrity requires both hardware and software security. Many cyber attacks succeed because they are designed to selectively target a specific hardware or software component in an embedded system and trigger its failure. Existing security measures also use attack vector models and isolate the malicious component as a counter-measure. Isolated security primitives, however, do not provide the overall trust required in an embedded system. We propose trust enhancements to a hardware security platform, where the trust specifications are implemented in both software and hardware. This distribution of trust makes it difficult for a hardware-only or software-only attack to cripple the system. The proposed approach is applied to a smart grid application consisting of third-party soft IP cores, where an attack on such a core can result in a blackout. System integrity is preserved in the event of an attack, and the anomalous behavior of the IP core is recorded by a supervisory module. The IP core also provides a snapshot of its trust metric, which is logged for further diagnostics.
Notions like security, trust, and privacy are crucial in the digital environment, and with the advent of technologies like the Internet of Things (IoT) and Cyber-Physical Systems (CPS), their importance is only going to increase. Trust has different definitions: some situations rely on real-world relationships between entities, while others depend on robust technologies to gain trust after deployment. In this paper we focus on these robust technologies, their evolution in past decades, and their scope in the near future. The evolution of robust trust technologies has involved diverse approaches; as a consequence, trust is defined, understood, and ascertained differently across heterogeneous domains and technologies. In this paper we look at digital trust technologies from the point of view of security and examine how they are making secure computing an attainable reality. The paper also revisits and analyses the Trusted Platform Module (TPM), Secure Elements (SE), hypervisors and virtualisation, Intel TXT, Trusted Execution Environments (TEE) such as the GlobalPlatform TEE and Intel SGX, along with Host Card Emulation and the Encrypted Execution Environment (E3). In our analysis we focus on these technologies and their application to the emerging domains of the IoT and CPS.
The collaborative nature of content development has given rise to the novel problem of multiple ownership in access control, such that a shared resource is administrated simultaneously by co-owners who may have conflicting privacy preferences and/or sharing needs. Prior work has focused on the design of unsupervised conflict resolution mechanisms. Driven by the need for human consent in organizational settings, this paper explores interactive policy negotiation, an approach complementary to that of prior work. Specifically, we propose an extension of Relationship-Based Access Control (ReBAC) to support multiple ownership, in which a policy negotiation protocol is in place for co-owners to come up with and give consent to an access control policy in a structured manner. During negotiation, the draft policy is assessed against formally defined availability criteria, whose verification is computationally hard, rising to the second level of the polynomial hierarchy. We devised two algorithms for verifying policy satisfiability, both employing a modern SAT solver for solving subproblems. The performance is found to be adequate for mid-sized organizations.
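As a rough illustration of what policy satisfiability means in this setting, the sketch below lets each co-owner state accessors who must be allowed and accessors who must be denied, and searches for a single policy that satisfies everyone. The data structures are assumed for illustration, and the exhaustive search merely stands in for the SAT-solver-backed algorithms the paper describes.

```python
# Toy satisfiability check for multi-owner policy negotiation.
# Each co-owner contributes a must-allow set and a must-deny set over accessors;
# a draft policy is a subset of accessors who may access the shared resource.
# Real implementations would encode this as a Boolean formula for a SAT solver;
# the exhaustive search below is only for illustration.
from itertools import chain, combinations

ACCESSORS = {"alice", "bob", "carol", "dave"}

coowner_requirements = [
    {"allow": {"alice"}, "deny": {"dave"}},
    {"allow": {"bob"},   "deny": set()},
    {"allow": set(),     "deny": {"carol"}},
]

def satisfies(policy, req):
    return req["allow"] <= policy and not (req["deny"] & policy)

def negotiate(accessors, requirements):
    subsets = chain.from_iterable(
        combinations(accessors, r) for r in range(len(accessors) + 1))
    for candidate in map(set, subsets):
        if all(satisfies(candidate, r) for r in requirements):
            return candidate
    return None   # requirements are jointly unsatisfiable

print(negotiate(ACCESSORS, coowner_requirements))  # e.g. {'alice', 'bob'}
```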
Different wireless Peer-to-Peer (P2P) routing protocols rely on cooperative protocols of interaction among peers, yet most of the surveyed protocols provide little detail on how peers can take other peers' reliability into consideration to improve routing efficiency in collaborative networks. Previous research has shown that in most trust and reputation evaluation schemes, the peers' rating behaviour can be improved by including the peers' attributes to better understand peers' reliability. This paper proposes a reliability-based trust model for dynamic trust evaluation between peers in P2P networks for collaborative routing. Since the peers' routing attributes vary dynamically, our proposed model must also accommodate the dynamic changes of peers' attributes and behaviour. We introduce peers' buffers as a scaling factor for peers' trust evaluation in trust and reputation routing protocols. A simulation-based comparison between the reliability-based and non-reliability-based trust models shows the improved performance of our proposed model in terms of delivery ratio and average message latency.
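To make the buffer-as-scaling-factor idea concrete, the sketch below blends new delivery evidence into a peer's trust score with a weight that shrinks as the rated peer's buffer fills. The update rule and parameters are our own illustrative assumptions, not the paper's formula.

```python
# Illustrative buffer-scaled trust update for P2P collaborative routing.
# Assumptions (ours): trust in [0, 1] is an exponential moving average of
# delivery outcomes, and the rated peer's buffer occupancy scales how strongly
# new evidence moves the score (a congested peer forwards less reliably).

def update_trust(prior_trust, delivered, buffer_occupancy, alpha=0.3):
    """prior_trust and buffer_occupancy in [0, 1]; delivered is True/False."""
    evidence = 1.0 if delivered else 0.0
    # Scaling factor: a nearly full buffer damps the weight of new evidence.
    weight = alpha * (1.0 - buffer_occupancy)
    return (1.0 - weight) * prior_trust + weight * evidence

trust = 0.5
for delivered, occupancy in [(True, 0.2), (True, 0.9), (False, 0.1)]:
    trust = update_trust(trust, delivered, occupancy)
    print(f"delivered={delivered}, buffer={occupancy:.1f} -> trust={trust:.3f}")
```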
Globally distributed collaboration requires cooperation and trust among team members. Current research suggests that informal, non-work-related communication plays a positive role in developing cooperation and trust. However, the way in which teams connect, i.e. via a social network, greatly influences cooperation and trust development. The study described in this paper employs agent-based modeling and simulation to investigate cooperation and trust development in the presence of informal, non-work-related communication in networked teams. Leveraging game theory, we present a model of how an individual makes strategic decisions when interacting with her social network neighbors. The results of simulations on a pseudo-scale-free network reveal the conditions under which informal communication has an impact, how different network degree distributions affect efficient trust and cooperation development, and how it is possible to "seed" trust and cooperation development amongst individuals in specific network positions. This study is the first to use agent-based modeling and simulation to examine the relationships between scale-free networks' topological features (degree distribution), cooperation and trust development, and informal communication.
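A minimal sketch of the kind of game-theoretic interaction such a simulation runs is given below: neighbouring agents on a small graph repeatedly play a prisoner's-dilemma round, and informal communication is modelled, by assumption, as a small boost to the probability of cooperating on ties where it occurs. The payoffs, graph, and parameters are illustrative.

```python
# Toy agent-based interaction on a network: neighbours play a
# prisoner's-dilemma round; informal ties nudge cooperation upward.
# Payoff values and the "informal tie" bonus are illustrative assumptions.
import random

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]        # small example graph
coop_prob = {n: 0.5 for n in range(4)}                   # per-agent tendency
informal_ties = {(0, 1)}                                 # pairs that also chat

def play(i, j):
    bonus = 0.2 if (i, j) in informal_ties or (j, i) in informal_ties else 0.0
    a = "C" if random.random() < min(1.0, coop_prob[i] + bonus) else "D"
    b = "C" if random.random() < min(1.0, coop_prob[j] + bonus) else "D"
    return PAYOFF[(a, b)]

random.seed(0)
scores = {n: 0 for n in range(4)}
for _ in range(100):                                     # repeated rounds
    for i, j in edges:
        pi, pj = play(i, j)
        scores[i] += pi
        scores[j] += pj
print(scores)
```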
Trust plays an important role in various user-facing systems and applications. It is particularly important in the context of decision support systems, where the system's output serves as one of the inputs for the users' decision making processes. In this work, we study the dynamics of explicit and implicit user trust in a simulated automated quality monitoring system, as a function of the system accuracy. We establish that users correctly perceive the accuracy of the system and adjust their trust accordingly.
In light of the prevalent trend towards dense HetNets, the conventional coupled user association, where a mobile device uses the same base station (BS) for both uplink and downlink traffic, is being questioned, and the alternative and more general downlink/uplink decoupling paradigm is emerging. We focus on designing an effective user association mechanism for HetNets with downlink/uplink decoupling, which has started to receive more attention. We use a combination of matching theory and stochastic geometry. We model the problem as a matching-with-contracts game by drawing an analogy with the hospital-doctor matching problem. In our model, we use stochastic geometry to derive a closed-form expression for the matching utility function. Our model captures the different objectives of users in the uplink/downlink directions and also from the perspective of BSs. Based on this game model, we present a matching algorithm for decoupled uplink/downlink user association that results in a stable allocation. Simulation results demonstrate that our approach provides close-to-optimal performance and significant gains over alternative approaches for user association in the decoupled context as well as the traditional coupled user association; these gains are a result of the holistic nature of our approach, which accounts for the additional cost associated with decoupling and the inter-dependence between uplink and downlink associations. Our work is also the first in the wireless communications domain to employ the matching-with-contracts approach.
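The sketch below shows a much-simplified deferred-acceptance matching for decoupled uplink/downlink association, with separate preference lists per direction and per-BS quotas. In the paper, preferences would be derived from the stochastic-geometry utility and the game is a richer matching with contracts; here the preference orders, quotas, and names are hard-coded assumptions for illustration.

```python
# Simplified deferred-acceptance matching for decoupled UL/DL user association.
# Assumptions: preferences are given directly, each BS has a quota, and the
# uplink and downlink directions are matched independently; all names and
# numbers are illustrative.

def deferred_acceptance(user_prefs, bs_prefs, quota):
    unmatched = list(user_prefs)
    proposals = {u: 0 for u in user_prefs}
    matched = {bs: [] for bs in bs_prefs}
    while unmatched:
        u = unmatched.pop(0)
        if proposals[u] >= len(user_prefs[u]):
            continue                       # user has exhausted its list
        bs = user_prefs[u][proposals[u]]
        proposals[u] += 1
        matched[bs].append(u)
        if len(matched[bs]) > quota[bs]:   # BS rejects its least-preferred user
            worst = max(matched[bs], key=lambda x: bs_prefs[bs].index(x))
            matched[bs].remove(worst)
            unmatched.append(worst)
    return matched

dl_prefs = {"u1": ["macro", "pico"], "u2": ["macro", "pico"], "u3": ["pico", "macro"]}
ul_prefs = {"u1": ["pico", "macro"], "u2": ["pico", "macro"], "u3": ["pico", "macro"]}
bs_prefs = {"macro": ["u1", "u2", "u3"], "pico": ["u3", "u1", "u2"]}
quota = {"macro": 2, "pico": 2}

print("downlink:", deferred_acceptance(dl_prefs, bs_prefs, quota))
print("uplink:  ", deferred_acceptance(ul_prefs, bs_prefs, quota))
```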
Social recommendation takes advantage of the influence of social relationships in decision making and the ready availability of social data through social networking systems. Trust relationships in particular can be exploited in such systems for rating prediction and recommendation, which has been shown to have the potential for improving the quality of the recommender and alleviating the issues of data sparsity, cold start, and adversarial attacks. An appropriate trust inference mechanism is necessary for extending the knowledge base of trust opinions and tackling the issue of limited trust information due to the connection sparsity of social networks. In this work, we offer a new solution to trust inference in social networks to provide a better knowledge base for trust-aware recommender systems. We propose using a semiring framework as a nonlinear way to combine trust evidence for inferring trust, where a trust relationship is modeled as a 2-D vector containing both trust and certainty information. The trust propagation and aggregation rules, as the building blocks of our trust inference scheme, are based upon the properties of trust relationships. In our approach, both trust and distrust (i.e., positive and negative trust) are considered, and opinion conflict resolution is supported. We evaluate the proposed approach on real-world datasets and show that our trust inference framework has high accuracy and is capable of handling trust relationships in large networks. The inferred trust relationships can enlarge the knowledge base of trust information and improve the quality of trust-aware recommendation.
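The sketch below illustrates semiring-style trust inference over (trust, certainty) pairs: one operator combines opinions along a path, another fuses opinions from parallel paths. The specific operator choices (multiplication along paths, a certainty-weighted average across paths) are one plausible instantiation chosen for illustration, not necessarily the paper's rules, and distrust handling is omitted.

```python
# Sketch of semiring-style trust inference with 2-D (trust, certainty) opinions.
# Operator choices below are illustrative assumptions, not the paper's rules.

def propagate(op1, op2):
    """Combine opinions along a path A->B->C (the 'multiplication')."""
    t1, c1 = op1
    t2, c2 = op2
    return (t1 * t2, c1 * c2)

def aggregate(opinions):
    """Fuse parallel-path opinions about the same target (the 'addition')."""
    total_c = sum(c for _, c in opinions)
    if total_c == 0:
        return (0.0, 0.0)
    t = sum(t * c for t, c in opinions) / total_c   # certainty-weighted trust
    c = max(c for _, c in opinions)                 # keep the best certainty
    return (t, c)

# A trusts B (0.9, 0.8) and B trusts C (0.7, 0.9); A also trusts D (0.6, 0.5)
# and D trusts C (0.8, 0.4). Infer A's opinion of C from the two paths.
path1 = propagate((0.9, 0.8), (0.7, 0.9))
path2 = propagate((0.6, 0.5), (0.8, 0.4))
print(aggregate([path1, path2]))
```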
Cloud computing has gained wide acceptance across the globe. Despite this acceptance and adoption, certain apprehensions related to the safety and security of data still exist. The service provider needs to convincingly demonstrate to the client the confidentiality of data on the cloud. This can be broadly translated into issues related to the process of identifying, developing, maintaining, and optimizing trust with clients regarding the services provided. Continuous demonstration, maintenance, and optimization of trust in the agreed-upon services affect the relationship with a client. The paper proposes a framework for integrating trust at the IaaS level in the cloud. It proposes a novel method for generating a trust index factor that considers the performance and the agility of the feedback received, using fuzzy logic.
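A minimal sketch of a fuzzy step that could turn feedback-derived performance and agility scores into a trust index factor is given below. The membership functions, rule base, and weighted-average defuzzification are illustrative assumptions, not the paper's actual design.

```python
# Toy fuzzy computation of a trust index factor from two inputs measured on
# client feedback: performance and agility (both normalised to [0, 1]).
# Membership functions, rules, and defuzzification are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def low(x):  return tri(x, -0.5, 0.0, 0.5)
def mid(x):  return tri(x, 0.0, 0.5, 1.0)
def high(x): return tri(x, 0.5, 1.0, 1.5)

def trust_index(performance, agility):
    # Rule strengths (min as AND); each rule maps to a representative output,
    # e.g. "IF performance high AND agility high THEN trust high".
    rules = [
        (min(high(performance), high(agility)), 0.9),   # -> high trust
        (min(mid(performance),  mid(agility)),  0.5),   # -> medium trust
        (min(low(performance),  low(agility)),  0.1),   # -> low trust
        (min(high(performance), low(agility)),  0.4),   # -> cautious trust
    ]
    num = sum(strength * value for strength, value in rules)
    den = sum(strength for strength, _ in rules)
    return num / den if den else 0.0                    # weighted-average defuzz

print(round(trust_index(performance=0.8, agility=0.6), 3))
```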
In this paper, a model of a secure wireless sensor network (WSN) is developed. This model is able to defend against most known network attacks without significantly reducing the energy resources of sensor nodes (SNs). We propose clustering as the means of network organization, which allows energy consumption to be reduced. Network protection is based on trust level calculation and the establishment of trusted relationships between trusted nodes. The primary purpose of the hierarchical trust management system (HTMS) is to protect the WSN from malicious actions of an attacker. The developed system should combine the properties of energy efficiency and reliability. To achieve this goal, the following tasks are performed: detection of illegal intruder actions; blocking of malicious nodes; avoidance of malicious attacks; determination of node authenticity; establishment of trusted connections between authentic nodes; and detection and blocking of defective nodes. HTMS operation is based on Bayes' theorem and the calculation of direct and centralized trust values.
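As an illustration of Bayesian trust calculation of the kind commonly used in WSN trust management, the sketch below scores a node with a beta-distribution posterior over binary (cooperative/malicious) observations for direct trust, and averages member reports at the cluster head for a centralized value. The exact formulas and fusion rule in the paper may differ; this is an assumed, simplified form.

```python
# Beta-distribution trust update (a standard Bayesian scheme for WSNs).
# Assumptions: interactions are binary (cooperative/malicious); direct trust is
# the posterior mean alpha / (alpha + beta); the cluster head forms a
# centralized value by averaging member reports. Details are illustrative.

def direct_trust(cooperative, malicious):
    alpha = cooperative + 1          # Beta(1, 1) uniform prior
    beta = malicious + 1
    return alpha / (alpha + beta)    # posterior mean

def centralized_trust(reports):
    """Cluster head fuses direct-trust reports about the same node."""
    return sum(reports) / len(reports)

# Node A observed node X: 18 cooperative, 2 malicious interactions.
t_a = direct_trust(18, 2)
# Node B observed node X: 5 cooperative, 5 malicious interactions.
t_b = direct_trust(5, 5)

print(f"direct (A): {t_a:.2f}, direct (B): {t_b:.2f}, "
      f"centralized: {centralized_trust([t_a, t_b]):.2f}")
```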
With the emergence of the internet of things (IoT) and participatory sensing (PS) paradigms, the trustworthiness of remotely sensed data has become a vital research question. In this work, we present the design of a trusted sensor, which uses physically unclonable functions (PUFs) as an anchor to ensure integrity, authenticity, and non-repudiation guarantees on the sensed data. We propose trusted sensors for mobile devices to address the problem of potential manipulation of mobile sensors' readings through exploitation of mobile device OS vulnerabilities in participatory sensing for IoT applications. Preliminary results from our implementation of a trusted visual sensor node show that the proposed security solution can be realized without consuming a significant amount of the sensor node's resources.
The main goal of this work is to create a model of trust which can serve as a reference for developing applications oriented towards collaborative annotation. Such a model includes design parameters inferred from online communities that operate on collaborative content. This study aims to create a static model, though it could be made dynamic, or split into more than one model, depending on the context of an application. An analysis of Genius as a peer-production community was conducted to understand user behaviors. This study characterizes user interactions based on the differentiation between Lightweight Peer Production (LWPP) and Heavyweight Peer Production (HWPP). It was found that more LWPP interactions take place at the lower levels of this system. As the level in the role system increases, there are more HWPP interactions. This can be explained by the fact that LWPP interactions are straightforward, while HWPP interactions demand more agility from the user. The latter provide more opportunities and therefore attract other users for further interactions.
Unease over data privacy will retard consumer acceptance of IoT deployments. The primary source of discomfort is a lack of user control over raw data that is streamed directly from sensors to the cloud. This is a direct consequence of the over-centralization of today's cloud-based IoT hub designs. We propose a solution that interposes a locally-controlled software component called a privacy mediator on every raw sensor stream. Each mediator is in the same administrative domain as the sensors whose data is being collected, and dynamically enforces the current privacy policies of the owners of the sensors or mobile users within the domain. This solution necessitates a logical point of presence for mediators within the administrative boundaries of each organization. Such points of presence are provided by cloudlets, which are small locally-administered data centers at the edge of the Internet that can support code mobility. The use of cloudlet-based mediators aligns well with natural personal and organizational boundaries of trust and responsibility.
Nowadays, both the number of cyberattacks and their sophistication have considerably increased, and their prevention concerns many organizations. Cooperation by means of information sharing is a promising strategy to address this problem, but unfortunately it poses many challenges. Indeed, finding a win-win environment is not straightforward, and organizations are not properly motivated to share information. This work presents a model to analyse the benefits and drawbacks of information sharing among organizations that exhibit a certain level of dependency. The proposed model applies functional dependency network analysis to emulate attack propagation, and game theory for information-sharing management. We present a simulation framework implementing the model that allows for testing different sharing strategies under several network and attack settings. Experiments using simulated environments show how the proposed model provides insights into which conditions and scenarios are beneficial for information sharing.
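To illustrate the dependency-propagation part of the model, the sketch below degrades one organization's operability after an attack and propagates the loss to dependent organizations in proportion to the dependency strength. The propagation rule, weights, and topology are illustrative assumptions, and the game-theoretic information-sharing layer is not modelled here.

```python
# Toy functional-dependency propagation of an attack's impact.
# Assumptions: operability in [0, 1]; a node's operability is reduced by each
# upstream degradation weighted by the dependency strength. Illustrative only.

# depends_on[x] = {y: weight} means x depends on y with the given strength.
depends_on = {
    "org_B": {"org_A": 0.6},
    "org_C": {"org_A": 0.3, "org_B": 0.5},
}

def propagate(initial_operability, depends_on, rounds=3):
    op = dict(initial_operability)
    for _ in range(rounds):                       # iterate until roughly stable
        for node, deps in depends_on.items():
            degradation = sum(w * (1.0 - op[src]) for src, w in deps.items())
            op[node] = max(0.0, min(op[node], 1.0 - degradation))
    return op

# An attack drops org_A's operability to 0.4; org_B and org_C start healthy.
print(propagate({"org_A": 0.4, "org_B": 1.0, "org_C": 1.0}, depends_on))
```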
Humans can easily find themselves in high-cost situations where they must choose between suggestions made by an automated decision aid and a conflicting human decision aid. Previous research indicates that humans often rely on automation or other humans, but not both simultaneously. Expanding on previous work conducted by Lyons and Stokes (2012), the current experiment measures how trust in automated or human decision aids differs with perceived risk and workload. The simulated task required 126 participants to choose the safest route for a military convoy; they were presented with conflicting information from an automated tool and a human. Results demonstrated that as workload increased, trust in automation decreased. As perceived risk increased, trust in the human decision aid increased. Individual differences in dispositional trust correlated with an increased trust in both decision aids. These findings can be used to inform training programs for operators who may receive information from human and automated sources. Examples of this context include air traffic control, aviation, and signals intelligence.
This study was conducted to determine whether monitoring moderated the impact of trust on the project performance of 57 virtual teams. Two sources of monitoring were examined: internal monitoring done by team members and external monitoring done by someone outside of the team. Two types of trust were also examined: affective trust, or trust based on emotion, and cognitive trust, or trust based on competency. Results indicate that when internal monitoring was high, affective trust was associated with increases in performance. However, affective trust was associated with decreases in performance when external monitoring was high. Both types of monitoring reduced the strong positive relationship between cognitive trust and the performance of virtual teams. Results of this study provide new insights about monitoring and trust in virtual teams and inform both theory and design.
The rising prevalence of algorithmic interfaces, such as curated feeds in online news, raises new questions for designers, scholars, and critics of media. This work focuses on how transparent design of algorithmic interfaces can promote awareness and foster trust. A two-stage process of how transparency affects trust was hypothesized, drawing on theories of information processing and procedural justice. In an online field experiment, three levels of system transparency were tested in the high-stakes context of peer assessment. Individuals whose expectations were violated (by receiving a lower grade than expected) trusted the system less, unless the grading algorithm was made more transparent through explanation. However, providing too much information eroded this trust. Attitudes of individuals whose expectations were met did not vary with transparency. Results are discussed in terms of a dual-process model of attitude change and the depth of justification of perceived inconsistency. Designing for trust requires balanced interface transparency: not too little and not too much.
Reputation systems in current electronic marketplaces can easily be manipulated by malicious sellers in order to appear more reputable than appropriate. We conducted a controlled experiment with 40 UK and 41 German participants on their ability to detect malicious behavior by means of an eBay-like feedback profile versus a novel interface involving an interactive visualization of reputation data. The results show that participants using the new interface could better detect and understand malicious behavior in three out of four attacks (overall detection accuracy of 77% with the new interface vs. 56% with the old one). Moreover, with the new interface, only 7% of the users decided to buy from the malicious seller (the options being to buy from one of the available sellers or to abstain from buying), as opposed to 30% in the old interface condition.
Recently, PIN-TRUST, a method to predict future trust relationships between users, was proposed. PIN-TRUST outperforms existing trust prediction methods by exploiting all types of interactions between users and the reciprocation of those interactions. In this paper, we validate whether its consideration of the reciprocation of interactions is really effective in trust prediction. Furthermore, we consider a new concept, the "uncertainty" of untrustworthy users, devised to reflect the difficulty of modeling the activities of untrustworthy users in PIN-TRUST. We also validate the effectiveness of this uncertainty concept. Through the validation, we reveal that the consideration of the reciprocation of interactions is effective for trust prediction with PIN-TRUST, and that it is necessary to regard the uncertainty of untrustworthy users the same as that of other users.
Human-machine trust is a critical mitigating factor in many HCI instances. Lack of trust in a system can lead to system disuse, whilst over-trust can lead to inappropriate use. Whilst human-machine trust has been examined extensively from within a technico-social framework, few efforts have been made to link the dynamics of trust within a steady-state operator-machine environment to the existing literature on the psychology of learning. We set out to recreate a commonly reported learning phenomenon within a trust acquisition environment: users learning which algorithms can and cannot be trusted to reduce traffic in a city. We failed to replicate (after repeated efforts) the learning phenomenon of "blocking", and instead found that people consistently make a very specific error in assigning trust to cues under conditions of uncertainty. This error can be seen as a cognitive bias and has important implications for HCI.