Biblio
Decision making in utilities, municipal, and energy companies depends on accurate and trustworthy weather information and predictions. Crowdsourced personal weather stations (PWS) are increasingly used to provide weather measurements at higher spatial and temporal resolution. However, tools and methods to ensure the trustworthiness of crowdsourced data in real time are lacking. In this paper, we present a Reputation System for Crowdsourced Rainfall Networks (RSCRN) to assign trust scores to personal weather stations in a region. Using real PWS data from the Weather Underground service in the high flood risk region of Norfolk, Virginia, we evaluate the performance of the proposed RSCRN. The proposed method converges to a confident trust score for a PWS within 10–20 observations after installation. Collectively, the results indicate that the trust score derived from the RSCRN reflects a collective measure of the trustworthiness of a PWS, ensuring both useful and trustworthy data for modeling and decision making in the future.
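The abstract does not give the RSCRN update rule itself; as a rough illustration of how a newly installed station's trust score might converge after a few observations, here is a minimal Beta-reputation-style sketch. The agreement test against a neighbourhood median and the uniform prior are assumptions for illustration, not taken from the paper.

```python
# Minimal Beta-reputation-style sketch of a per-station trust score.
# NOT the RSCRN algorithm from the paper; the agreement test against a
# neighbourhood median and the uniform Beta(1, 1) prior are assumptions.
from statistics import median

def update_trust(alpha, beta, station_obs, neighbour_obs, tol=1.0):
    """Count an observation as 'agreeing' if it is within `tol` mm of the
    neighbourhood median rainfall, then return the updated Beta parameters
    and the trust score E[Beta(alpha, beta)] = alpha / (alpha + beta)."""
    agrees = abs(station_obs - median(neighbour_obs)) <= tol
    alpha, beta = (alpha + 1, beta) if agrees else (alpha, beta + 1)
    return alpha, beta, alpha / (alpha + beta)

alpha, beta = 1.0, 1.0  # uninformative prior for a newly installed PWS
for obs, neighbours in [(2.1, [2.0, 2.3, 1.9]), (0.0, [4.0, 3.8, 4.2])]:
    alpha, beta, trust = update_trust(alpha, beta, obs, neighbours)
    print(f"trust = {trust:.2f}")
```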
With the vision of building "A Smart World", the Internet of Things (IoT) plays a crucial role in which users, computing systems, and objects with sensing and actuating capabilities cooperate with unparalleled convenience. Among the many applications of IoT, healthcare is among the fastest emerging, as new technological advances create opportunities for early detection of illness, rapid decision making, and even aftercare monitoring. It has become a reality for many patients to be monitored remotely, overcoming traditional logistical obstacles. However, these e-health applications raise concerns about the security, privacy, and integrity of medical data. For secure transmission in IoT healthcare, data gathered from sensors in a patient's body area network needs to be sent to the end user and may need to be aggregated, visualized, and/or evaluated before being presented. Here, trust is critical. Therefore, an end-to-end trustworthy system architecture can guarantee the reliable transmission of a patient's data and confirm the success of IoT healthcare applications.
This paper presents the development and configuration of a virtually air-gapped cloud environment in AWS to secure production software workloads and patient data (ePHI) and to achieve HIPAA compliance.
In an Internet of Things (IoT) network, each node (device) both provides and requires services. With the growth of IoT, the number of nodes providing the same service has also increased, creating the problem of selecting one reliable service from among many providers. In this paper, we propose a scalable graph-based collaborative filtering recommendation algorithm, improved using trust, to solve the service selection problem; unlike a central recommender, it can scale to match the growth of IoT. Using this recommender, a node can predict its ratings for the nodes providing the required service and then select the best-rated service provider.
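As a hedged illustration of trust-augmented collaborative filtering for service selection, the sketch below blends rating similarity with an explicit trust value when weighting neighbours. The linear blend, the weight w, and the toy data are assumptions, not the paper's exact algorithm.

```python
# Hypothetical sketch of a trust-weighted neighbourhood predictor for IoT
# service selection; the weighting scheme (similarity blended with trust)
# is an assumption for illustration.

def cosine_sim(ra, rb):
    """Cosine similarity over the providers two nodes have both rated."""
    common = set(ra) & set(rb)
    if not common:
        return 0.0
    dot = sum(ra[p] * rb[p] for p in common)
    na = sum(ra[p] ** 2 for p in common) ** 0.5
    nb = sum(rb[p] ** 2 for p in common) ** 0.5
    return dot / (na * nb)

def predict(node, provider, ratings, trust, w=0.5, k=3):
    """ratings: {node: {provider: rating}}; trust: {(a, b): trust in [0, 1]}.
    Weight each neighbour's rating by w*similarity + (1-w)*trust and take
    the top-k weighted average."""
    cands = []
    for other, r in ratings.items():
        if other == node or provider not in r:
            continue
        weight = w * cosine_sim(ratings[node], r) + (1 - w) * trust.get((node, other), 0.0)
        cands.append((weight, r[provider]))
    top = sorted(cands, reverse=True)[:k]
    total = sum(wgt for wgt, _ in top)
    return sum(wgt * rating for wgt, rating in top) / total if total else None

ratings = {"n1": {"p1": 4}, "n2": {"p1": 5, "p2": 3}, "n3": {"p2": 4}}
trust = {("n3", "n2"): 0.8, ("n3", "n1"): 0.2}
print(predict("n3", "p1", ratings, trust))  # n3's predicted rating for provider p1
```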
Policies govern choices in the behavior of systems. They are applied to human behavior as well as to the behavior of autonomous systems but are defined differently in each case. Generally, humans have the ability to interpret the intent behind policies and to bring about their desired effects, even occasionally violating them when the need arises. In contrast, policies for automated systems fully define the prescribed behavior without ambiguity, conflicts, or omissions. The increasing use of AI techniques and machine learning in autonomous systems such as drones promises to blur these boundaries and allows us to conceive, in a similar way, more flexible policies for the spectrum of human-autonomous system collaborations. In coalition environments this spectrum extends across boundaries of authority in pursuit of a common coalition goal and covers collaborations between human and autonomous systems alike. In the social sciences, social exchange theory has been applied successfully to explain human behavior in a variety of contexts. It provides a framework linking expected rewards, costs, satisfaction, and commitment to explain and anticipate the choices that individuals make when confronted with various options. We discuss here how it can be used within coalition environments to explain joint decision making and to help formulate policies, re-framing the concepts where appropriate. Social exchange theory is particularly attractive in this context as it provides a theory with “measurable” components that can be readily integrated into machine reasoning processes.
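To make the "measurable components" point concrete, here is a deliberately naive sketch that scores collaboration options from social-exchange-style quantities (reward, cost, satisfaction, commitment). The linear form, the weights, and the option names are invented for illustration and are not a model from the paper.

```python
# Illustrative only: score options by a weighted social-exchange-style utility.
def exchange_score(option, weights=(1.0, 1.0, 0.5, 0.5)):
    wr, wc, ws, wk = weights
    return (wr * option["reward"] - wc * option["cost"]
            + ws * option["satisfaction"] + wk * option["commitment"])

options = {
    "delegate_to_drone": {"reward": 0.8, "cost": 0.3, "satisfaction": 0.6, "commitment": 0.7},
    "keep_human_control": {"reward": 0.5, "cost": 0.2, "satisfaction": 0.8, "commitment": 0.9},
}
best = max(options, key=lambda name: exchange_score(options[name]))
print(best)
```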
In an open network environment, unfamiliar entities can establish mutual trust through Automated Trust Negotiation (ATN), which is based on exchanging digital credentials. In traditional ATN, an attribute certificate requirement is either satisfied or not, and every certificate carries the same importance in the negotiation strategy, which may cause unnecessary negotiation failures. In practice, attribute satisfaction is not simply 0 or 1; it often lies between 0 and 1, so satisfaction degrees differ and the negotiation strategy needs to be quantified. This paper analyzes a fuzzy negotiation process in order to further improve the efficiency and accuracy of trust establishment.
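The following sketch illustrates the general idea of graded credential satisfaction: each attribute requirement yields a degree in [0, 1] and requirements carry importance weights. The membership function, attribute names, and weights are assumptions for illustration, not the paper's negotiation strategy.

```python
# Minimal sketch of quantifying partial credential satisfaction in trust
# negotiation; membership functions and weights are illustrative assumptions.
def satisfaction(value, lo, hi):
    """Linear membership: 0 below lo, 1 above hi, linear in between."""
    if value <= lo:
        return 0.0
    if value >= hi:
        return 1.0
    return (value - lo) / (hi - lo)

# Requirement: years of membership (weight 0.7) and reputation (weight 0.3).
degrees = {
    "membership_years": (satisfaction(3, lo=1, hi=5), 0.7),
    "reputation": (satisfaction(0.8, lo=0.5, hi=0.9), 0.3),
}
overall = sum(d * w for d, w in degrees.values()) / sum(w for _, w in degrees.values())
print(f"negotiation satisfaction degree: {overall:.2f}")  # accept if above a threshold
```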
Recent studies have shown that adding explicit social trust information to social recommendation significantly improves the prediction accuracy of ratings, but clear trust data among users is difficult to obtain in real life. Scholars have studied and proposed trust measures to calculate and predict interaction and trust between users. In this article, a method for extracting social trust relationships based on the Hellinger distance is proposed: user similarity is calculated from the f-divergence of nodes on one side of the user-item bipartite network. Then, a new matrix factorization model based on implicit social relationships is proposed by adding the extracted implicit social relations to an improved matrix factorization. The experimental results show that recommending with implicit social trust performs almost as well as using actual explicit user trust ratings, and when explicit trust data cannot be extracted, our method performs better than traditional algorithms.
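A small sketch of the distance at the core of this approach: representing each user by a normalized rating distribution over items and comparing users with the Hellinger distance. The normalization scheme and toy ratings are assumptions; the paper's full matrix factorization model is not reproduced here.

```python
# Sketch of Hellinger-distance-based user similarity over a user-item
# bipartite network; this illustrates the distance, not the paper's model.
from math import sqrt

def hellinger(p, q):
    """Hellinger distance between two discrete distributions over the same
    item set; 0 means identical, 1 means disjoint support."""
    return sqrt(0.5 * sum((sqrt(pi) - sqrt(qi)) ** 2 for pi, qi in zip(p, q)))

def to_distribution(ratings, items):
    total = sum(ratings.get(i, 0.0) for i in items)
    return [ratings.get(i, 0.0) / total if total else 0.0 for i in items]

items = ["i1", "i2", "i3"]
u = to_distribution({"i1": 5, "i2": 3}, items)
v = to_distribution({"i1": 4, "i3": 2}, items)
similarity = 1.0 - hellinger(u, v)   # implicit trust weight in [0, 1]
print(round(similarity, 3))
```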
Wireless sensor networks have attracted substantial research interest because of their unique features, such as fault tolerance and autonomous operation. Maximizing coverage under resource scarcity is a crucial problem in wireless sensor networks, and approaches that address this problem while maximizing network lifetime are considered prominent. Node scheduling is one such mechanism. A scheduling strategy that addresses the target coverage problem based on coverage probability and trust values was proposed in the Energy Efficient Coverage Protocol (EECP). In this paper, optimized decision rules for determining the number of active nodes are obtained using rough set theory. The results show that the proposed extension yields fewer decision rules to consider when determining node states in the network, and hence improves network efficiency by reducing the number of packets transmitted and the associated overhead.
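For flavour only, here is a toy sketch of the kind of reduced decision rule such an analysis might produce: the node state is decided from discretized coverage probability and trust. The thresholds and rules are invented for illustration and are not the rules derived in the paper.

```python
# Toy sketch of reduced decision rules for node scheduling; thresholds and
# rules are illustrative only, not the paper's rough-set output.
def discretize(value, low=0.4, high=0.7):
    return "low" if value < low else "high" if value > high else "medium"

def node_state(coverage_prob, trust):
    c, t = discretize(coverage_prob), discretize(trust)
    # Example reduced rule set: only two attributes decide the state.
    if c == "high" and t != "low":
        return "active"
    if c == "low":
        return "sleep"
    return "active" if t == "high" else "sleep"

print(node_state(0.85, 0.6))  # -> active
print(node_state(0.30, 0.9))  # -> sleep
```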
Current technologies, including cloud computing, social networking, mobile applications, and crowd and synthetic intelligence, coupled with the explosion in storage and processing power, are giving rise to massive-scale marketplaces for a wide variety of resources and services. They are also enabling unprecedented forms and levels of collaboration among human and machine entities. In this new era, trust remains the keystone of success in any relationship between two or more parties. A primary challenge is to establish and manage trust in environments where massive numbers of consumers, providers, and brokers are largely autonomous, with vastly diverse requirements, capabilities, and trust profiles. Most contemporary trust management solutions are oblivious to diversity in trustors' requirements and contexts, utilize direct or indirect experience as the only form of trust computation, employ hardcoded trust computations, and only marginally consider collaboration in trust management. We identify the need for a reference architecture for trust management to guide the development of a wide spectrum of trust management systems. In our previous work, we presented a preliminary reference architecture for trust management which provides customizable and reconfigurable trust management operations to accommodate varying levels of diversity and trust personalization. In this paper, we present a comprehensive taxonomy for trust management and extend our reference architecture to feature collaboration as a first-class object. Our goal is to promote the development of new collaborative trust management systems, where various trust management operations involve collaborating entities. Using the proposed architecture, we implemented a collaborative personalized trust management system. Simulation results demonstrate the effectiveness and efficiency of our system.
Trust management in the cloud domain has been a persistent research topic among scholars, and a similar issue is bound to arise in the emerging fog domain. Although fog and cloud are relatively similar, evaluating trust in the fog domain is more challenging than in the cloud. Fog's high mobility support, distributed nature, and closer proximity to end users mean that fog nodes are likely to operate in vulnerable environments. Unlike the cloud, fog involves little to no human intervention and lacks redundancy; hence, it can experience downtime at any given time, which makes fogs harder to trust given their unpredictable status. These distinguishing factors, combined with the factors already used for trust evaluation in the cloud, can serve as metrics to evaluate trust in fog. This paper discusses a use case of a campus scenario with several fog servers and the metrics used in evaluating the trustworthiness of the fog servers. While a fuzzy logic method is used to evaluate trust, the contribution of this study is the identification of fuzzy logic configurations that can alter the trust value of a fog.
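To illustrate how fuzzy logic configuration choices shape a trust value, the sketch below evaluates a tiny rule base over two inputs (latency and uptime) with triangular membership functions and a weighted-average defuzzification. The breakpoints, rules, and metric choices are assumptions, not the configurations studied in the paper.

```python
# Minimal fuzzy-logic sketch for fog trust; membership breakpoints and the
# rule base are illustrative assumptions.
def tri(x, a, b, c):
    """Triangular membership with peak at b and feet at a and c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fog_trust(latency_ms, uptime_pct):
    low_latency  = tri(latency_ms, -1, 0, 50)
    high_latency = tri(latency_ms, 30, 100, 201)
    high_uptime  = tri(uptime_pct, 90, 100, 101)
    low_uptime   = tri(uptime_pct, -1, 0, 95)
    # Rule base: (firing strength, trust level of the rule's consequent)
    rules = [
        (min(low_latency, high_uptime), 0.9),   # fast and stable -> high trust
        (min(high_latency, low_uptime), 0.1),   # slow and flaky  -> low trust
        (min(low_latency, low_uptime), 0.5),    # fast but flaky  -> medium trust
    ]
    total = sum(strength for strength, _ in rules)
    return sum(s * level for s, level in rules) / total if total else 0.5

print(round(fog_trust(20, 99), 2))
```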
This paper outlines the IoT Databox model as a means of making the Internet of Things (IoT) accountable to individuals. Accountability is key to building consumer trust and is mandated in data protection legislation. We briefly outline the `external' data subject accountability requirement specified in actual legislation in Europe and proposed legislation in the US, and how meeting this requirement turns on surfacing the invisible actions and interactions of connected devices and the social arrangements in which they are embedded. The IoT Databox model is proposed as an in-principle means of enabling accountability and providing individuals with the mechanisms needed to build trust in the IoT.
Software Defined Networking (SDN) is an emerging paradigm that changes the way networks are managed by separating the control plane from the data plane and making networks programmable. This separation brings flexibility, automation, and orchestration, and offers savings in both capital and operational expenditure. Despite all the advantages offered by SDN, it introduces new threats that did not exist before or were harder to exploit in traditional networks, potentially making network penetration easier. One of the key threats to SDN is the authentication and authorisation of the network applications that control network behaviour (unlike traditional networks, where network devices such as routers and switches are autonomous and run proprietary software and protocols to control the network). This paper proposes a mechanism that helps the control layer authenticate network applications and set authorisation permissions that restrict manipulation of network resources.
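A minimal sketch of the general pattern, assuming a shared-secret signature for application authentication and a per-application permission set for authorisation. The permission names, key handling, and message format are invented placeholders, not a specific SDN controller's API or the paper's mechanism.

```python
# Illustrative controller-side authentication/authorisation for network apps.
import hmac, hashlib

APP_KEYS = {"load-balancer": b"secret-key"}           # shared secrets per app
APP_PERMS = {"load-balancer": {"read_topology", "install_flow"}}

def authenticate(app_id, message, signature):
    expected = hmac.new(APP_KEYS[app_id], message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def authorise(app_id, action):
    return action in APP_PERMS.get(app_id, set())

msg = b"install_flow:10.0.0.1->10.0.0.2"
sig = hmac.new(APP_KEYS["load-balancer"], msg, hashlib.sha256).hexdigest()
if authenticate("load-balancer", msg, sig) and authorise("load-balancer", "install_flow"):
    print("flow rule accepted")
else:
    print("request rejected")
```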
Mobile ad-hoc networks (MANETs) are decentralized, self-organizing communication systems that have become pervasive in the current technological landscape. MANETs have become a vital solution for services that need flexible establishment and dynamic wireless connections, such as military operations, healthcare systems, vehicular networks, and mobile conferences. Hence it is important to estimate the trustworthiness of the moving devices. In this research, we propose a model to improve trusted routing in mobile ad-hoc networks by identifying malicious nodes. The proposed system uses a Reinforcement Learning (RL) agent that learns to detect malicious nodes, focusing on a MANET running the Ad-hoc On-demand Distance Vector (AODV) protocol. Most existing systems were developed under the assumption of a small network with a limited number of neighbours; by introducing reinforcement learning concepts, this work tries to overcome those limitations. The main objective of the research is to introduce a new model capable of detecting malicious nodes that significantly degrade the performance of a MANET. The malicious behaviour is simulated with black holes that move randomly across the network. After identifying the technology stack and RL concepts, the system was designed and implemented; tests were then performed, and defects and further improvements were identified. The research concludes that the proposed model provides highly accurate and reliable trust improvement by detecting malicious nodes in a dynamic MANET environment.
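As a hedged illustration of the RL idea (not the paper's agent design), the sketch below uses a simple Q-learning-style update where the agent decides whether to forward through a neighbour and is rewarded according to packet delivery, so black-hole-like neighbours accumulate negative value. Reward values, the epsilon-greedy policy, and the simulated drop behaviour are assumptions.

```python
# Simplified Q-learning sketch for flagging black-hole-like neighbours.
import random
from collections import defaultdict

Q = defaultdict(float)          # Q[(neighbour, action)]
ALPHA, EPSILON = 0.3, 0.1

def choose(neighbour):
    if random.random() < EPSILON:
        return random.choice(["forward", "avoid"])
    return max(["forward", "avoid"], key=lambda a: Q[(neighbour, a)])

def update(neighbour, action, delivered):
    if action == "avoid":
        reward = 0.0
    else:
        reward = 1.0 if delivered else -1.0   # penalize dropped packets
    Q[(neighbour, action)] += ALPHA * (reward - Q[(neighbour, action)])

# Simulate a black hole ("n2" drops everything) and a well-behaved node.
for _ in range(200):
    for node, drop in [("n1", False), ("n2", True)]:
        act = choose(node)
        update(node, act, delivered=(act == "forward" and not drop))

suspicious = [n for n in ("n1", "n2") if Q[(n, "forward")] < 0]
print("suspected black holes:", suspicious)
```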
The Semantic Web today is a web that allows for intelligent knowledge retrieval by means of semantically annotated tags. This web, also known as the Intelligent Web, aims to provide meaningful information to humans and machines alike. However, the information thus provided lacks the component of trust. We therefore propose a method to embed trust in Semantic Web documents through the concept of provenance, which answers when, where, and by whom the documents were created or modified. This paper demonstrates the approach using the Manchester approach to provenance, implemented in a University Ontology.
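A small sketch of attaching provenance to an ontology entity with standard PROV-O terms via rdflib. The university namespace and resource names are placeholders, and this is a generic PROV-O example rather than the Manchester approach or the ontology used in the paper.

```python
# Hedged sketch: annotate an ontology entity with who/when provenance (PROV-O).
from rdflib import Graph, Literal, Namespace, RDF, XSD

PROV = Namespace("http://www.w3.org/ns/prov#")
UNI = Namespace("http://example.org/university#")   # placeholder namespace

g = Graph()
g.bind("prov", PROV)
g.bind("uni", UNI)

g.add((UNI.CourseCatalogue2024, RDF.type, PROV.Entity))
g.add((UNI.CourseCatalogue2024, PROV.wasAttributedTo, UNI.RegistrarOffice))
g.add((UNI.CourseCatalogue2024, PROV.generatedAtTime,
       Literal("2024-01-15T09:00:00", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
```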
In theory, remote attestation is a powerful primitive for building distributed systems atop untrusting peers. Unfortunately, the canonical attestation framework defined by the Trusted Computing Group is insufficient to express rich contextual relationships between client-side software components. Thus, attestors and verifiers must rely on ad-hoc mechanisms to handle real-world attestation challenges like attestors that load executables in nondeterministic orders, or verifiers that require attestors to track dynamic information flows between attestor-side components. In this paper, we survey these practical attestation challenges. We then describe a new attestation framework, named Cobweb, which handles these challenges. The key insight is that real-world attestation is a graph problem. An attestation message is a graph in which each vertex is a software component, and has one or more labels, e.g., the hash value of the component, or the raw file data, or a signature over that data. Each edge in an attestation graph is a contextual relationship, like the passage of time, or a parent/child fork() relationship, or a sender/receiver IPC relationship. Cobweb's verifier-side policies are graph predicates which analyze contextual relationships. Experiments with real, complex software stacks demonstrate that Cobweb's abstractions are generic and can support a variety of real-world policies.
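The following sketch illustrates the attestation-as-graph framing: vertices are software components with labels (here just hashes), edges carry contextual relationships, and a verifier policy is a predicate over the graph. The specific policy (every component reachable from init via fork edges must carry a whitelisted hash) is an invented example, not one of Cobweb's actual policies.

```python
# Small sketch of a verifier-side graph predicate over an attestation graph.
components = {                      # vertex -> labels
    "init":   {"hash": "aaa"},
    "server": {"hash": "bbb"},
    "plugin": {"hash": "zzz"},
}
edges = [                           # (parent, child, relationship)
    ("init", "server", "fork"),
    ("server", "plugin", "fork"),
]
WHITELIST = {"aaa", "bbb", "ccc"}

def fork_descendants(root):
    seen, stack = set(), [root]
    while stack:
        node = stack.pop()
        for parent, child, rel in edges:
            if parent == node and rel == "fork" and child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

def policy_ok(root="init"):
    return all(components[c]["hash"] in WHITELIST
               for c in fork_descendants(root) | {root})

print(policy_ok())   # False: "plugin" has a non-whitelisted hash
```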
Due to the large quantity and diversity of content easily available to users, recommender systems (RS) have become an integral part of nearly every online system. They allow users to resolve the information overload problem by proactively generating high-quality personalized recommendations. Trust metrics help leverage the preferences of similar users and have led to improved predictive accuracy, which is why they have become an important consideration in the design of RSs. We argue that there are additional aspects of trust as a human notion that can be integrated with collaborative filtering techniques to suggest to users items that they might like. In this paper, we present an approach for the top-N recommendation task that computes prediction scores for items as a user-specific combination of global and local trust models to capture differences in preferences. Our experiments show that the proposed method improves upon the standard trust model and outperforms competing top-N recommendation approaches on real-world data by up to 19%.
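A minimal sketch of a user-specific combination of global and local trust scores for top-N ranking. The convex combination, the per-user weight, and the toy scoring functions are assumptions used to illustrate the idea, not the paper's estimator.

```python
# Sketch: blend global and local trust scores per user, then rank top-N.
def combined_scores(user, items, global_score, local_score, beta):
    """global_score/local_score: functions (user, item) -> score;
    beta: {user: weight in [0, 1] toward the global model}."""
    b = beta.get(user, 0.5)
    return {i: b * global_score(user, i) + (1 - b) * local_score(user, i)
            for i in items}

def top_n(scores, n=3):
    return sorted(scores, key=scores.get, reverse=True)[:n]

# Toy models: a popularity-style global score and a neighbourhood-style local one.
global_score = lambda u, i: {"i1": 0.9, "i2": 0.4, "i3": 0.6}[i]
local_score  = lambda u, i: {"i1": 0.2, "i2": 0.8, "i3": 0.5}[i]
scores = combined_scores("u1", ["i1", "i2", "i3"], global_score, local_score, beta={"u1": 0.3})
print(top_n(scores, n=2))
```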