Bibliography
Modern Internet communication over TCP is secured with Secure Sockets Layer (SSL)/Transport Layer Security (TLS), which relies on a Public Key Infrastructure (PKI) to authenticate public keys. Conventional PKI is operated by Certification Authorities (CAs), which issue and store digital certificates binding users' public keys to their identities. This centralizes authority in the CAs and makes their certificate stores attractive targets, which poses a security concern: there have been past incidents in which CAs issued rogue certificates or were compromised and made to issue malicious certificates. Motivated by these facts, in this paper we propose a method, named Trustful, which aims to build a decentralized PKI using blockchain. Blockchains provide immutable storage in a decentralized manner and allow us to write smart contracts. The Ethereum blockchain can be used to build a web-of-trust model in which users publish attributes, validate attributes about other users by signing them, and maintain a trust store of users they trust. Trustful works on the Web-of-Trust (WoT) model and allows any entity on the network to verify attributes about any other entity through a trusted network, providing an alternative to the conventional CA-based identity-verification model. The proposed model has been implemented and tested for efficacy and against known major security attacks.
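To make the web-of-trust idea concrete, here is a minimal Python sketch of attribute attestation and trust-store lookup; the names and data structures are illustrative assumptions and do not correspond to Trustful's actual Ethereum contract interface.

```python
# Hypothetical, simplified model of web-of-trust attribute attestation.
# Names and structures are illustrative, not the Trustful contract API.
from collections import defaultdict

attestations = defaultdict(set)   # (subject, attribute) -> set of signers
trust_store = defaultdict(set)    # verifier -> identities they trust directly

def attest(signer, subject, attribute):
    """Signer vouches that `subject` owns `attribute` (signatures omitted here)."""
    attestations[(subject, attribute)].add(signer)

def accepts(verifier, subject, attribute, max_depth=2):
    """Accept the attribute if some attester is reachable in the verifier's
    trust network within `max_depth` hops (a simple web-of-trust walk)."""
    frontier, seen = {verifier}, {verifier}
    for _ in range(max_depth):
        frontier = {t for v in frontier for t in trust_store[v]} - seen
        if frontier & attestations[(subject, attribute)]:
            return True
        seen |= frontier
    return False

trust_store["alice"].add("bob")
attest("bob", "carol", "email:carol@example.org")
print(accepts("alice", "carol", "email:carol@example.org"))  # True: bob attested, alice trusts bob
```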
Today the integrity of digital documents and the authenticity of their origin is often hard to verify. Existing Public Key Infrastructures (PKIs) are capable of certifying digital identities but do not provide solutions to immutably store signatures, and the process of certification is often not transparent. In this work we propose Veritaa, a Distributed Public Key Infrastructure and Signature Store (DPKISS). The major innovation of Veritaa is the Graph of Trust, a directed graph that uses relations between identity claims to certify the identities and stores signed relations to digital document identifiers. The distributed architecture of Veritaa and the Graph of Trust enables a transparent certification process. To ensure non-repudiation and immutability of all actions that have been signed on the Graph of Trust, an application specific Distributed Ledger Technology (DLT) is used as secure storage. In this work a reference implementation of the proposed architecture was designed and implemented. Furthermore, a testbed was created and used for the evaluation of Veritaa. The evaluation of Veritaa shows the benefits and the high performance of the proposed architecture.
Today, Internet of Things (IoT) devices mostly operate in enclosed, proprietary environments. To unfold the full potential of IoT applications, a unifying and permissionless environment is crucial: all IoT devices, even ones unknown to each other, would be able to trade services and assets across various domains. In order to realize such applications, uniquely resolvable identities are essential. However, quantifiable trust in identities and their authentication is not trivially provided in such an environment due to the absence of a trusted authority. This research presents a new identity and trust framework for IoT devices based on Distributed Ledger Technology (DLT). IoT devices assign identities to themselves, which are managed publicly and in a decentralized manner on the DLT network as Self-Sovereign Identities (SSIs). In addition to the Identity Management System (IdMS), the framework provides a Web of Trust (WoT) approach to enable automatic trust rating of arbitrary identities. The framework uses the IOTA Tangle to access and store data, achieving high scalability and low computational overhead. To demonstrate the feasibility of our framework, we provide a proof-of-concept implementation and evaluate the stated objectives for real-world applicability as well as the vulnerability against common threats in IdMSs and WoTs.
We propose an approach to enforce security in disruption- and delay-tolerant networks (DTNs), where long delays, high packet-drop rates, and the unavailability of a central trusted entity make traditional approaches infeasible. We use a trust model based on subjective logic to continuously evaluate the trustworthiness of security credentials issued in a distributed manner by network participants, dealing with the absence of centralized trusted authorities.
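As a worked illustration of the underlying trust calculus, the following sketch implements subjective-logic opinions and Jøsang's standard discounting operator for transitive trust; the DTN-specific credential handling described in the abstract is not reproduced.

```python
# Sketch of subjective-logic opinions and transitive-trust discounting (Josang).
from dataclasses import dataclass

@dataclass
class Opinion:
    b: float        # belief
    d: float        # disbelief
    u: float        # uncertainty (b + d + u = 1)
    a: float = 0.5  # base rate

    def expectation(self):
        return self.b + self.a * self.u

def discount(trust_in_advisor: Opinion, advisor_opinion: Opinion) -> Opinion:
    """Discount the advisor's opinion by our trust in the advisor."""
    b = trust_in_advisor.b * advisor_opinion.b
    d = trust_in_advisor.b * advisor_opinion.d
    u = trust_in_advisor.d + trust_in_advisor.u + trust_in_advisor.b * advisor_opinion.u
    return Opinion(b, d, u, advisor_opinion.a)

# Node A trusts node B; B has evaluated a credential issued by C.
a_trusts_b = Opinion(b=0.8, d=0.1, u=0.1)
b_rates_c  = Opinion(b=0.6, d=0.2, u=0.2)
print(round(discount(a_trusts_b, b_rates_c).expectation(), 3))  # A's derived trust in C
```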
We consider the automatic verification of information flow security policies of web-based workflows, such as conference submission systems like EasyChair. Our workflow description language allows for loops, non-deterministic choice, and an unbounded number of participating agents. The information flow policies are specified in a temporal logic for hyperproperties. We show that the verification problem can be reduced to the satisfiability of a formula of first-order linear-time temporal logic, and provide decidability results for relevant classes of workflows and specifications. We report on experimental results obtained with an implementation of our approach on a series of benchmarks.
Internet-of-Things devices often collect and transmit sensitive information like camera footage, health monitoring data, or whether someone is home. These devices protect data in transit with end-to-end encryption, typically using TLS connections between devices and associated cloud services. But these TLS connections also prevent device owners from observing what their own devices are saying about them. Unlike in traditional Internet applications, where the end user controls one end of a connection (e.g., their web browser) and can observe its communication, Internet-of-Things vendors typically control the software in both the device and the cloud. As a result, owners have no way to audit the behavior of their own devices, leaving them little choice but to hope that these devices are transmitting only what they should. This paper presents TLS–Rotate and Release (TLS-RaR), a system that allows device owners (e.g., consumers, security researchers, and consumer watchdogs) to authorize devices, called auditors, to decrypt and verify recent TLS traffic without compromising future traffic. Unlike prior work, TLS-RaR requires no changes to TLS's wire format or cipher suites, and it allows the device's owner to conduct a surprise inspection of recent traffic, without prior notice to the device that its communications will be audited.
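The core "rotate and release" idea can be illustrated with a short, self-contained sketch using symmetric AEAD keys; this is a conceptual illustration only and does not reflect TLS-RaR's actual record-layer mechanics or key schedule.

```python
# Conceptual illustration of rotate-and-release auditing: an auditor records
# ciphertext, and only after the device rotates to a fresh traffic key does it
# release the old key, making past traffic auditable while future traffic
# stays confidential. Not the actual TLS-RaR protocol.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

old_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
recorded = AESGCM(old_key).encrypt(nonce, b"sensor reading: nobody is home", None)

# Device rotates to a new traffic key, then releases the old one to the auditor.
new_key = AESGCM.generate_key(bit_length=128)
released_key = old_key

# The auditor can now inspect what was sent earlier...
print(AESGCM(released_key).decrypt(nonce, recorded, None))
# ...but traffic protected with new_key remains opaque to the auditor.
```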
Trust networks have been widely used to mitigate the data-sparsity and cold-start problems of collaborative filtering. Recently, some approaches have been proposed that exploit explicit signed trust relationships, i.e., trust and distrust relationships. These approaches ignore the fact that users who trust or distrust each other in a trust network may nevertheless have different preferences in real life. Most of these approaches also treat distrust, like trust, as transitive, whereas other existing work observed that trust is transitive while distrust is intransitive. Moreover, explicit signed trust relationships are fairly sparse and may not suffice to infer users' true preferences. In this paper, we propose to create implicit signed trust relationships and exploit them along with explicit signed trust relationships to alleviate the sparsity of trust relationships. We also confirm the similarity (resp. dissimilarity) of implicit and explicit trust (resp. distrust) relationships by using the similarity score between users, so that users' true preferences can be inferred. In addition to these strategies, we propose a matrix factorization model that simultaneously exploits implicit and explicit signed trust relationships along with rating information, handling the transitivity of trust and the intransitivity of distrust. Extensive experiments on the Epinions dataset show that the proposed approach outperforms existing approaches in terms of accuracy.
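For intuition, here is a minimal sketch of trust-regularized matrix factorization in the spirit of such models; the paper's specific treatment of implicit versus explicit signed relationships and of distrust intransitivity is not reproduced, and all names and hyperparameters are illustrative.

```python
# Toy trust-regularized matrix factorization: SGD on the rating loss plus a
# social term that pulls user factors toward trusted neighbors' factors.
import numpy as np

def fit(ratings, trust, n_users, n_items, k=8, lr=0.01, reg=0.05, beta=0.5, epochs=50):
    """ratings: list of (user, item, rating); trust: dict user -> list of trusted users."""
    rng = np.random.default_rng(0)
    U = rng.normal(scale=0.1, size=(n_users, k))
    V = rng.normal(scale=0.1, size=(n_items, k))
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - U[u] @ V[i]
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * U[u] - reg * V[i])
        # Social term: move each user's factors toward the mean of trusted neighbors.
        for u, friends in trust.items():
            if friends:
                U[u] -= lr * beta * (U[u] - U[friends].mean(axis=0))
    return U, V

ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 2.0)]
trust = {0: [1], 1: [0], 2: []}
U, V = fit(ratings, trust, n_users=3, n_items=2)
print(round(float(U[2] @ V[0]), 2))  # predicted rating of user 2 for item 0
```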
Client-side JavaScript has become ubiquitous in web applications to improve user experience and reduce server load. However, since clients are untrusted, servers cannot rely on the confidentiality or integrity of client-side JavaScript code and the data that it operates on. For example, client-side input validation must be repeated at server side, and confidential business logic cannot be offloaded. In this paper, we present TrustJS, a framework that enables trustworthy execution of security-sensitive JavaScript inside commodity browsers. TrustJS leverages trusted hardware support provided by Intel SGX to protect the client-side execution of JavaScript, enabling a flexible partitioning of web application code. We present the design of TrustJS and provide initial evaluation results, showing that trustworthy JavaScript offloading can further improve user experience and conserve more server resources.
As the Internet of Things (IoT) matures, many concerns are being raised about security, privacy, and interoperability. The Web of Things (WoT) model leverages web technologies to improve interoperability. Thanks to its distributed components, the web has scaled well beyond initial expectations. Still, secure authentication and communication across organizational boundaries rely on the Public Key Infrastructure (PKI), which is a non-transparent, centralized single point of failure. We can improve transparency and shorten the chain of trust, and thus significantly improve IoT security, by leveraging blockchain technology and web security standards. In this paper, we build a scalable, decentralized IoT-centric PKI and discuss how it can be combined with the emerging web authentication and authorization framework for constrained environments.
The Semantic Web today is a web that allows for intelligent knowledge retrieval by means of semantically annotated tags. This web, also known as the intelligent web, aims to provide meaningful information to humans and machines alike. However, the information thus provided lacks the component of trust. We therefore propose a method to embed trust in Semantic Web documents through the concept of provenance, which answers when, where, and by whom the documents were created or modified. This paper demonstrates the approach using the Manchester approach to provenance, implemented in a university ontology.
There are vast amounts of information in our world, and accessing the most accurate information quickly is becoming more difficult and complicated. Much relevant information gets ignored, which leads to duplication of work and effort. The focus therefore tends to be on providing rapid and intelligent retrieval systems. Information retrieval (IR) is the process of searching for information related to some topic of interest. Because of the massive number of search results, the user will normally have difficulty identifying the relevant ones. To alleviate this problem, a recommendation system is used. A recommendation system is a kind of information-filtering system that predicts the relevance of retrieved information to the user's needs according to some criteria, and can therefore provide the user with the results that best fit those needs. The services provided through the web normally return massive amounts of information about any requested item or service, and an efficient recommendation system is required to classify these results. A recommendation system can be further improved if augmented with trust information, that is, if recommendations are ranked according to their level of trust. In our research, we produced a recommendation system combined with an efficient level-of-trust system to guarantee that the posts, comments, and feedback from users are trusted. We adapted the concept of LoT (Level of Trust) [1], since it can cover medicine, shopping, and learning through social media. The proposed system TRS_LoT provides trusted recommendations to users with a high percentage of accuracy. A set of 300 posts with more than 5,000 comments from Amazon was selected as the dataset, and the experiment was conducted on this dataset based on post ratings.
A recommender system suggests items that might be of interest to users in social networks. Collaborative filtering is an approach that works based on similarity and recommends items liked by other, similar users. A trust model adopts the users' trust network in place of similarity. A multi-faceted trust model considers multiple, heterogeneous trust relationships among users and recommends items based on the ratings that exist in the network of trustees of a specific facet. This paper applies a genetic algorithm to estimate the parameters of a multi-faceted trust model, in which the trust weights are calculated from the ratings and the trust network for each facet separately. The model was built on the Epinions dataset, which includes consumers' opinions, ratings for items, and the web-of-trust network. It was used to predict users' ratings for items in different facets, and the root mean squared error (RMSE) of prediction was taken as the measure of performance. Empirical evaluations demonstrated that multi-faceted models improve the performance of the recommender system.
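A toy sketch of such an optimization loop (selection and mutation only) shows how a genetic algorithm can fit per-facet trust weights against an RMSE objective; the data and model below are synthetic stand-ins, not the Epinions setup or the paper's full multi-faceted model.

```python
# Toy genetic algorithm fitting per-facet trust weights so that a trust-weighted
# average of trustees' ratings minimizes RMSE against observed ratings.
import numpy as np

rng = np.random.default_rng(1)
trustee_ratings = rng.integers(1, 6, size=(200, 3)).astype(float)  # ratings seen via 3 facets
true_w = np.array([0.6, 0.3, 0.1])
target = trustee_ratings @ true_w                                   # ratings to predict

def rmse(w):
    w = np.abs(w); w = w / w.sum()                                   # keep weights on the simplex
    return np.sqrt(np.mean((trustee_ratings @ w - target) ** 2))

pop = rng.random((40, 3))
for _ in range(100):
    fitness = np.array([rmse(w) for w in pop])
    parents = pop[np.argsort(fitness)[:10]]                          # select the fittest
    children = parents[rng.integers(0, 10, 30)] + rng.normal(0, 0.05, (30, 3))  # mutate
    pop = np.vstack([parents, children])

best = pop[np.argmin([rmse(w) for w in pop])]
print(round(rmse(best), 3))  # should approach 0 as the weights are recovered
```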
Trust Management (TM) systems for authentication are vital to the security of online interactions, which are ubiquitous in our everyday lives. Various systems, like the Web PKI (X.509) and PGP's Web of Trust, are used to manage trust in this setting. In recent years, blockchain technology has been introduced as a panacea for our security problems, including that of authentication, without sufficient reasoning as to its merits. In this work, we investigate the merits of using open distributed ledgers (ODLs), such as the one implemented by blockchain technology, for securing TM systems for authentication. We formally model such systems and explore how blockchain can help mitigate attacks against them. After formal argumentation, we conclude that in the context of Trust Management for authentication, blockchain technology, and ODLs in general, can offer considerable advantages compared to previous approaches. Our analysis is, to the best of our knowledge, the first to formally model and argue about the security of TM systems for authentication based on blockchain technology. To achieve this result, we first provide an abstract model for TM systems for authentication. Then, we show how this model can be conceptually encoded in a blockchain by expressing it as a series of state transitions. As a next step, we examine five prevalent attacks on TM systems and provide evidence that blockchain-based solutions can benefit the security of such systems by mitigating, or completely negating, such attacks.
The rapid development of cloud computing has resulted in the emergence of numerous web services on the Internet. Selecting a suitable cloud service is becoming a major problem for users, especially non-professionals. Quality of Service (QoS) is considered the criterion for judging web services, and several Collaborative Filtering (CF)-based QoS prediction methods have been proposed in recent years. However, QoS values observed by different users may vary largely due to network conditions and geographical location, and QoS data provided by untrusted users degrades prediction accuracy; most existing methods seldom take both facts into consideration. In this paper, we present a trust-aware and location-based approach for web service QoS prediction. A trust value for each user is evaluated before the similarity calculation, and location is taken into account when selecting similar neighbors. A series of experiments is performed on a real-world QoS dataset including 339 service users and 5,825 services. The experimental analysis shows that the accuracy of our method is much higher than that of other CF-based methods.
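The prediction step can be sketched as a neighbor-based formula in which candidate neighbors are filtered by a trust threshold and by region before similarity weighting; the trust-scoring procedure and dataset handling in the paper differ, and all names below are illustrative.

```python
# Sketch of trust- and location-aware neighbor-based QoS prediction.
import numpy as np

def predict(qos, target_user, service, trust, region, min_trust=0.5):
    """qos: users x services matrix with np.nan for unobserved values."""
    known = ~np.isnan(qos[target_user])
    mu_t = np.nanmean(qos[target_user])
    num = den = 0.0
    for u in range(qos.shape[0]):
        # Filter neighbors: skip untrusted users and users in other regions.
        if u == target_user or trust[u] < min_trust or region[u] != region[target_user]:
            continue
        if np.isnan(qos[u, service]):
            continue
        common = known & ~np.isnan(qos[u])
        if common.sum() < 2:
            continue
        sim = np.corrcoef(qos[target_user, common], qos[u, common])[0, 1]
        if np.isnan(sim) or sim <= 0:
            continue
        num += sim * (qos[u, service] - np.nanmean(qos[u]))   # deviation from the neighbor's mean
        den += sim
    return mu_t + num / den if den else mu_t

qos = np.array([[0.8, 1.2, np.nan],
                [0.9, 1.1, 2.0],
                [5.0, 6.0, 7.0]])
print(round(predict(qos, 0, 2, trust=[1.0, 0.9, 0.2], region=["EU", "EU", "US"]), 2))
```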
We study the problem of k-anonymization of mail messages in the realistic scenario of auditing mail traffic in a major commercial Web mail service. Mail auditing is necessary in various Web mail debugging and quality assurance activities, such as anti-spam or the qualitative evaluation of novel mail features. It is conducted by trained professionals, often referred to as "auditors", who are shown messages that could expose personally identifiable information. We address here the challenge of k-anonymizing such messages, focusing on machine-generated mail messages that represent more than 90% of today's mail traffic. We introduce a novel message signature, Mail-Hash, specifically tailored to identifying structurally similar messages, which allows us to put such messages in the same equivalence class. We then define a process that generates, for each class, masked mail samples that can be shown to auditors while guaranteeing the k-anonymity of users. The productivity of auditors is measured by the amount of non-hidden mail content they can see every day, under normal working conditions, which set a limit on the number of mail samples they can review. In addition, we consider k-anonymity over time since, by definition of k-anonymity, every new release places additional constraints on the assignment of samples. We describe in detail the results we obtained over actual Yahoo mail traffic, and thus demonstrate that our methods are feasible at Web mail scale. Given the constantly growing concern of users over their email being scanned by others, we argue that it is critical to devise algorithms that guarantee k-anonymity, and to implement the associated processes, in order to restore the trust of mail users.
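The grouping idea can be sketched as follows: mask variable fields, hash the remaining structure, and release only equivalence classes covering at least k distinct users. This is a deliberately crude stand-in; the real Mail-Hash signature and masking process are considerably more sophisticated.

```python
# Toy structural-signature grouping for k-anonymous mail auditing.
import hashlib
import re
from collections import defaultdict

def structural_hash(body: str) -> str:
    """Mask obviously variable fields and hash the remaining template."""
    template = re.sub(r"[0-9]+", "<NUM>", body)
    template = re.sub(r"\S+@\S+", "<EMAIL>", template)
    return hashlib.sha256(template.encode()).hexdigest()

def releasable_classes(messages, k=3):
    """messages: list of (user_id, body). Return classes safe to show auditors."""
    classes = defaultdict(list)
    for user, body in messages:
        classes[structural_hash(body)].append((user, body))
    # Keep only classes that cover at least k distinct users.
    return {h: msgs for h, msgs in classes.items()
            if len({u for u, _ in msgs}) >= k}
```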
We live in the era of mobile computing. Mobile devices have more sensors and more capabilities than desktop computers. For any computing device that contains sensitive information and accesses the Internet, security is a major concern for both enterprises and end users. This research focuses on the ways in which the two mobile platforms in common use, iOS and Android, handle permissions, in an attempt to discern whether there are any identifiable trends on either platform with respect to applications being over- or under-privileged.
In this paper, we present a multi-featured, supervised automatic keyword extraction system. We extracted salient semantic features that are descriptive of candidate keyphrases, and a Random Forest classifier was used for training. The system achieved a precision of 58.3% and outperformed two top-performing systems when benchmarked on a crowdsourced dataset. Furthermore, our approach achieved personal-best precision and F-measure scores of 32.7 and 25.5, respectively, on the SemEval keyphrase extraction challenge dataset. The paper describes the approaches used as well as the results obtained.
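A skeleton of such a pipeline, with deliberately simple surrogate features in place of the paper's semantic features, might look like the following; the training data and feature set here are purely illustrative.

```python
# Skeleton of supervised keyphrase extraction with a Random Forest: candidate
# phrases get simple surrogate features (frequency, first position, length).
from sklearn.ensemble import RandomForestClassifier

def features(doc_tokens, phrase):
    joined = " ".join(doc_tokens)
    return [joined.count(phrase),                       # term frequency
            joined.find(phrase) / max(len(joined), 1),  # relative first position
            len(phrase.split())]                        # phrase length in words

# Toy training data: (document tokens, candidate phrase, is_keyphrase).
train = [("random forests for keyword extraction".split(), "keyword extraction", 1),
         ("random forests for keyword extraction".split(), "for", 0),
         ("trust aware service recommendation".split(), "service recommendation", 1),
         ("trust aware service recommendation".split(), "aware", 0)]

X = [features(doc, ph) for doc, ph, _ in train]
y = [label for _, _, label in train]
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([features("keyword extraction with forests".split(), "keyword extraction")]))
```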