Bibliography
The labor market involves several mutually untrusting actors with conflicting objectives. We propose a blockchain-based system for the labor market that provides benefits to all participants in terms of confidence, transparency, trust and tracking. Our system would handle employment data through the new Wavelet blockchain platform. It would change the job market by enabling direct agreements between parties without intermediaries and by providing new mechanisms for negotiating employment conditions. Furthermore, our system would reduce the need for the existing paper workflow as well as for the major internet recruiting companies. Our work differs from other blockchain-based labor record systems in two key respects: the use of the Wavelet blockchain platform, which features metastability, a directed-acyclic-graph structure and a Turing-complete smart contract platform; and the introduction of human interaction inside the smart contract logic, rather than fully automatic contract execution. The results are promising though inconclusive, and we will further explore the potential of blockchain solutions for labor market problems.
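To make the human-in-the-loop contract logic concrete, below is a minimal sketch in Python of an employment agreement as a state machine that waits for explicit human approval at each step rather than executing automatically. The state names, methods and two-party signing scheme are illustrative assumptions, not the Wavelet contract API.

```python
# Hypothetical sketch: an employment agreement that requires explicit
# human confirmation at each step instead of executing automatically.
# States and methods are illustrative, not the Wavelet contract API.
from enum import Enum, auto

class State(Enum):
    DRAFT = auto()        # terms proposed, not yet agreed
    NEGOTIATING = auto()  # one party proposed amended terms
    ACTIVE = auto()       # both parties signed

class EmploymentContract:
    def __init__(self, employer, employee, terms):
        self.parties = {employer, employee}
        self.terms = terms            # e.g. {"salary": 5000}
        self.signatures = set()
        self.state = State.DRAFT

    def propose_amendment(self, party, new_terms):
        """A human counter-offer replaces the terms and voids signatures."""
        assert party in self.parties and self.state is not State.ACTIVE
        self.terms = new_terms
        self.signatures.clear()
        self.state = State.NEGOTIATING

    def sign(self, party):
        """Explicit human approval; the contract activates only when both sign."""
        assert party in self.parties and self.state is not State.ACTIVE
        self.signatures.add(party)
        if self.signatures == self.parties:
            self.state = State.ACTIVE

# Usage: employer drafts, employee counter-offers, then both sign.
c = EmploymentContract("employer", "employee", {"salary": 5000})
c.propose_amendment("employee", {"salary": 5500})
c.sign("employee"); c.sign("employer")
assert c.state is State.ACTIVE
```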
Several computer vision applications such as object detection and face recognition have started to rely completely on deep learning based architectures. These architectures, when paired with appropriate loss functions and optimizers, produce state-of-the-art results in a myriad of problems. On the other hand, with the advent of "blockchain", the cybersecurity industry has developed a new sense of trust which was earlier missing from both the technical and commercial perspectives. The employment of cryptographic hashes as well as symmetric/asymmetric encryption and decryption algorithms ensures security without any human intervention (i.e., without a centralized authority). In this research, we present the synergy between the best of both these worlds. We first propose a model which uses the learned parameters of a typical deep neural network and is secured from external adversaries by cryptography and blockchain technology. As the second contribution, we propose a new parameter-tampering attack to properly justify the role of blockchain in machine learning.
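The core of the first contribution can be illustrated in a few lines: fingerprint the learned parameters with a cryptographic hash, commit the digest, and any later parameter tampering becomes detectable. A minimal sketch, assuming a toy two-layer network; the workflow and names are placeholders, not the authors' implementation.

```python
# Minimal sketch: hash a model's learned parameters so a verifier can
# detect parameter-tampering attacks. Details are illustrative.
import hashlib
import numpy as np

def parameter_digest(weights):
    """SHA-256 over the concatenated raw bytes of all weight arrays."""
    h = hashlib.sha256()
    for w in weights:
        h.update(np.ascontiguousarray(w, dtype=np.float64).tobytes())
    return h.hexdigest()

# "Trained" parameters of a toy two-layer network.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 2))]
committed = parameter_digest(weights)   # digest committed, e.g. on-chain

# An adversary perturbs a single weight; verification now fails.
weights[0][0, 0] += 1e-6
assert parameter_digest(weights) != committed  # tampering detected
```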
This paper discusses possible efforts to mitigate insider-threat risk and aims to inspire organizations to consider identifying insider threats as one of the risks in the company's enterprise risk management activities. The paper suggests the Trusted Human Framework (THF) as an ongoing, cyclic process to detect and deter potential employees who are bound to become fraudsters or perpetrators violating the access and trust given to them. The mitigation control statements were derived from the recommended practices in the "Common Sense Guide to Mitigating Insider Threats" produced by the Software Engineering Institute, Carnegie Mellon University (SEI-CMU). The statements were validated via a survey answered by fifty respondents who work in Malaysia.
Blockchains - with their inherent properties of transaction transparency, distributed consensus, immutability and cryptographic verifiability - are increasingly seen as a means to underpin innovative products and services in a range of sectors, from finance through to energy and healthcare. Too often, discussions assert that the trustless nature of blockchain technologies enables and actively promotes their suitability - there being no need to trust third parties or centralised control. Yet humans need to be able to trust systems, and to trust others with whom the system enables transactions. In this paper, we highlight that understanding this need for trust is critical for the development of blockchain-based systems. Through an online study with 125 users of the most well-known blockchain-based system - the cryptocurrency Bitcoin - we uncover that human and institutional aspects of trust are pervasive. Our analysis highlights that, when designing future blockchain-based technologies, we ought to consider not only computational trust but also the wider ecosystem, how trust plays a part in users engaging with or disengaging from such ecosystems, and where design choices impact upon trust. From this, we distill a set of guidelines for software engineers developing blockchain-based systems for societal applications.
In socially assistive robotics, an important research area is the development of adaptation techniques and their effect on human-robot interaction. We present a meta-learning-based policy-gradient method for addressing the problem of adaptation in human-robot interaction, and we also investigate its role as a mechanism for trust modelling. By building an escape room scenario in mixed reality with a robot, we test our hypothesis that bi-directional trust can be influenced by different adaptation algorithms. We found that our proposed model increased the perceived trustworthiness of the robot and influenced the dynamics of gaining the human's trust. Additionally, participants reported that the robot perceived them as more trustworthy during interactions with the meta-learning-based adaptation than with the previously studied statistical adaptation model.
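As a rough illustration of meta-learned adaptation, here is a toy Reptile-style sketch (a simpler cousin of the meta-learning policy-gradient method in the paper, used purely as a stand-in): the robot meta-learns an initial policy parameter so that a few gradient steps adapt it to a new user's preferred action rate.

```python
# Toy sketch of meta-learned adaptation (Reptile-style; the paper uses a
# meta-learning policy-gradient method, this is a simplified stand-in).
# Each "user" prefers a different action rate p*; the policy chooses an
# action rate sigmoid(theta) and the reward is -(p - p*)^2.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adapt(theta, p_star, lr=2.0, steps=1):
    """Inner loop: gradient ascent on one user's reward."""
    for _ in range(steps):
        p = sigmoid(theta)
        grad = -2 * (p - p_star) * p * (1 - p)   # d(reward)/d(theta)
        theta += lr * grad
    return theta

rng = np.random.default_rng(1)
theta0, meta_lr = 0.0, 0.1
for _ in range(500):                              # outer loop over users
    p_star = rng.uniform(0.2, 0.8)
    theta0 += meta_lr * (adapt(theta0, p_star) - theta0)  # Reptile update

# A few adaptation steps move the policy toward a new user's rate of 0.7.
print(sigmoid(adapt(theta0, p_star=0.7, steps=5)))
```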
Humans often assume that robots are rational. We believe robots take optimal actions given their objective; hence, when we are uncertain about what the robot's objective is, we interpret the robot's actions as optimal with respect to our estimate of its objective. This approach makes sense when robots straightforwardly optimize their objective, and it enables humans to learn what the robot is trying to achieve. However, our insight is that, when robots are aware that humans learn by trusting that the robot's actions are rational, intelligent robots do not act as the human expects; instead, they take advantage of the human's trust and exploit it to optimize their own objective more efficiently. In this paper, we formally model instances of human-robot interaction (HRI) where the human does not know the robot's objective using a two-player game. We formulate different ways in which the robot can model the uncertain human, and we compare solutions of this game when the robot has conservative, optimistic, rational, and trusting human models. In an offline linear-quadratic case study and a real-time user study, we show that trusting human models can naturally lead to communicative robot behavior, which influences end-users and increases their involvement.
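A small sketch of the Boltzmann-rational observation model commonly used to formalize "humans assume robots are rational": the human updates a belief over candidate robot objectives as if each observed action were noisily optimal under the true objective. The action set, candidate objectives and rationality coefficient below are illustrative, not taken from the paper's case study.

```python
# Sketch: a human who trusts that robot actions are rational updates a
# belief over candidate objectives via a Boltzmann-rational likelihood,
#   P(a | theta) proportional to exp(beta * Q(a; theta)).
# All values below are illustrative.
import numpy as np

actions = np.array([0.0, 0.5, 1.0])   # robot's discrete action set
thetas = np.array([0.0, 0.5, 1.0])    # candidate robot objectives
beta = 5.0                            # human's assumed rationality

def Q(a, theta):
    return -(a - theta) ** 2          # action quality under an objective

def likelihood(a, theta):
    """P(a | theta): Boltzmann-rational action distribution."""
    logits = beta * Q(actions, theta)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return probs[np.where(actions == a)[0][0]]

belief = np.full(len(thetas), 1 / 3)  # uniform prior over objectives
# Observing action 1.0 twice concentrates belief on theta = 1.0 -- a
# trusting robot can exploit exactly this inference to steer the human.
for a in (1.0, 1.0):
    post = belief * np.array([likelihood(a, th) for th in thetas])
    belief = post / post.sum()
print(dict(zip(thetas.tolist(), belief.round(3))))
```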
In order to improve the accuracy of similarity computation, an improved collaborative filtering algorithm based on trust and information entropy is proposed in this paper. First, the direct trust between users is determined from their ratings to explore the users' potential trust relationships, and a time-decay function is introduced to capture how a user's interest decays over time. Second, direct trust and indirect trust are combined to obtain the overall trust, which is weighted with the Pearson similarity to obtain the trust similarity. Then, information entropy theory is introduced to calculate a similarity based on weighted information entropy. Finally, the trust similarity and the entropy-based similarity are weighted to obtain a combined similarity, which is used to predict the ratings of the target user and generate recommendations. Simulations show that the improved algorithm achieves higher recommendation accuracy and can provide a more accurate and reliable recommendation service.
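The pipeline in this abstract reduces to a weighted combination of similarity terms. A condensed sketch follows, with the trust-propagation and entropy-weighting details replaced by simple stand-ins and all mixing weights chosen arbitrarily for illustration.

```python
# Condensed sketch of the combined similarity; the paper's exact trust
# propagation and entropy weighting are replaced by simple stand-ins,
# and all mixing weights below are arbitrary.
import numpy as np
from scipy.stats import pearsonr, entropy

ratings = {  # user -> {item: (rating on a 1-5 scale, age of rating in days)}
    "u1": {"i1": (5, 10), "i2": (3, 40), "i3": (4, 5)},
    "u2": {"i1": (4, 12), "i2": (2, 35), "i3": (5, 8)},
}

def time_decay(age, lam=0.01):
    return np.exp(-lam * age)          # older ratings count less

def direct_trust(u, v):
    """Trust inferred from co-rated items, discounted by rating age."""
    common = ratings[u].keys() & ratings[v].keys()
    if not common:
        return 0.0
    return float(np.mean([
        time_decay(max(ratings[u][i][1], ratings[v][i][1]))
        * (1 - abs(ratings[u][i][0] - ratings[v][i][0]) / 4)
        for i in common]))

def pearson_sim(u, v):
    common = sorted(ratings[u].keys() & ratings[v].keys())
    return pearsonr([ratings[u][i][0] for i in common],
                    [ratings[v][i][0] for i in common])[0]

def entropy_weight(u):
    """Users with flat (high-entropy) rating patterns are less informative."""
    counts = np.bincount([r for r, _ in ratings[u].values()], minlength=6)[1:]
    return 1 - entropy(counts / counts.sum(), base=5)

def combined_sim(u, v, alpha=0.6):
    trust_sim = 0.5 * direct_trust(u, v) + 0.5 * pearson_sim(u, v)
    entropy_sim = entropy_weight(u) * entropy_weight(v) * pearson_sim(u, v)
    return alpha * trust_sim + (1 - alpha) * entropy_sim

print(combined_sim("u1", "u2"))  # similarity used to predict ratings
```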
False alarms and misses are the two general kinds of alarm errors, and both can decrease an operator's trust in the alarm system. Specifically, trust in such systems takes two forms, represented in this research by two kinds of responses to alarms: compliance and reliance. Beyond false alarms and misses, the two responses are differentially affected by properties of the alarm system, situational factors and operator factors. However, most existing studies have only qualitatively analyzed the relationship between a single variable and the two responses. In this research, all available experimental studies were identified through database searches using the keyword "compliance and reliance", without restriction on year of publication, up to December 2017. Six relevant studies and fifty-two sets of key data were obtained as the database of this research. A neural network is then adopted as a tool to establish the quantitative relationship between multiple factors and each of the two forms of trust. The result will be of great significance for further study of the influence of human decision making on the overall fault detection rate and false alarm rate of the human-machine system.
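As a sketch of the modelling step, one could fit a small neural network that maps alarm-system and operator factors to the two trust responses. The data below are synthetic placeholders, not the fifty-two data sets collected in the study.

```python
# Sketch: a small neural network mapping alarm-system factors to the two
# trust responses (compliance, reliance). The data are synthetic
# placeholders, not the study's fifty-two data sets.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Features: [false-alarm rate, miss rate, time pressure, workload]
X = rng.uniform(0, 1, size=(52, 4))
# Plausible toy relationship: false alarms erode compliance, misses erode reliance.
y = np.column_stack([1 - 0.8 * X[:, 0] + 0.05 * rng.normal(size=52),
                     1 - 0.8 * X[:, 1] + 0.05 * rng.normal(size=52)])

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(X, y)
print(net.predict([[0.3, 0.1, 0.5, 0.5]]))  # predicted [compliance, reliance]
```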
In theory, remote attestation is a powerful primitive for building distributed systems atop untrusting peers. Unfortunately, the canonical attestation framework defined by the Trusted Computing Group is insufficient to express rich contextual relationships between client-side software components. Thus, attestors and verifiers must rely on ad-hoc mechanisms to handle real-world attestation challenges like attestors that load executables in nondeterministic orders, or verifiers that require attestors to track dynamic information flows between attestor-side components. In this paper, we survey these practical attestation challenges. We then describe a new attestation framework, named Cobweb, which handles these challenges. The key insight is that real-world attestation is a graph problem. An attestation message is a graph in which each vertex is a software component, and has one or more labels, e.g., the hash value of the component, or the raw file data, or a signature over that data. Each edge in an attestation graph is a contextual relationship, like the passage of time, or a parent/child fork() relationship, or a sender/receiver IPC relationship. Cobweb's verifier-side policies are graph predicates which analyze contextual relationships. Experiments with real, complex software stacks demonstrate that Cobweb's abstractions are generic and can support a variety of real-world policies.
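The graph abstraction is easy to mirror in code. Below is a minimal sketch of an attestation graph with labeled vertices, contextual edges, and a verifier policy expressed as a graph predicate; the names and the example policy are illustrative, not Cobweb's actual interface.

```python
# Minimal sketch of attestation-as-a-graph: vertices are software
# components carrying labels (here, a hash), edges carry contextual
# relationships, and a verifier policy is a predicate over the graph.
# Names and the example policy are illustrative, not Cobweb's interface.
import hashlib

class AttestationGraph:
    def __init__(self):
        self.labels = {}   # vertex -> {label_name: value}
        self.edges = []    # (src, dst, relationship)

    def add_component(self, name, data: bytes):
        self.labels[name] = {"sha256": hashlib.sha256(data).hexdigest()}

    def add_edge(self, src, dst, rel):
        self.edges.append((src, dst, rel))

KNOWN_GOOD = {hashlib.sha256(b"loader-v2").hexdigest(),
              hashlib.sha256(b"app-v7").hexdigest()}

def fork_chain_trusted(g):
    """Policy predicate: every component on a fork() chain is known-good."""
    on_chain = {v for s, d, rel in g.edges if rel == "fork" for v in (s, d)}
    return all(g.labels[v]["sha256"] in KNOWN_GOOD for v in on_chain)

g = AttestationGraph()
g.add_component("loader", b"loader-v2")
g.add_component("app", b"app-v7")
g.add_edge("loader", "app", "fork")   # parent/child relationship
print(fork_chain_trusted(g))          # True; a modified binary flips this
```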
Due to the large quantity and diversity of content easily available to users, recommender systems (RS) have become an integral part of nearly every online system. They allow users to resolve the information overload problem by proactively generating high-quality personalized recommendations. Trust metrics help leverage the preferences of similar users and have led to improved predictive accuracy, which is why they have become an important consideration in the design of RSs. We argue that there are additional aspects of trust, as a human notion, that can be integrated with collaborative filtering techniques to suggest items that users might like. In this paper, we present an approach for the top-N recommendation task that computes prediction scores for items as a user-specific combination of global and local trust models to capture differences in preferences. Our experiments show that the proposed method improves upon the standard trust model and outperforms competing top-N recommendation approaches on real-world data by up to 19%.
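The scoring scheme can be sketched directly: each item's prediction score is a user-specific convex combination of a global trust score and a local (neighborhood) trust score, and the top-N items are returned. The score tables and per-user weight below are placeholders for the learned models described in the paper.

```python
# Sketch of a user-specific blend of global and local trust scores for
# top-N ranking; scores and weights are placeholders for learned models.
import heapq

global_score = {"i1": 0.9, "i2": 0.4, "i3": 0.7}          # whole-network trust
local_score = {"u1": {"i1": 0.2, "i2": 0.8, "i3": 0.6}}   # neighborhood trust
alpha = {"u1": 0.3}   # per-user mixing weight (learned in the paper)

def top_n(user, n=2):
    a = alpha[user]
    scores = {i: a * global_score[i] + (1 - a) * local_score[user][i]
              for i in global_score}
    return heapq.nlargest(n, scores, key=scores.get)

print(top_n("u1"))   # ['i2', 'i3'] for this toy data
```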
Unmanned systems are increasing in number, while their manning requirements remain the same. To decrease manpower demands, machine learning techniques and autonomy are gaining traction and visibility. One barrier is human perception and understanding of autonomy. Machine learning techniques can result in "black box" algorithms that may yield high fitness but poor comprehension by operators. However, Interactive Machine Learning (IML), a method to incorporate human input over the course of algorithm development using neuro-evolutionary machine-learning techniques, may offer a solution. IML is evaluated here for its impact on developing autonomous team behaviors in an area search task. Initial findings show that IML-generated search plans were chosen over plans generated using a non-interactive ML technique, even though the participants trusted them slightly less. Further, participants discriminated each of the two types of plans from the other with a high degree of accuracy, suggesting that the IML approach imparts behavioral characteristics into algorithms, making them more recognizable. Together, the results lay the foundation for exploring how to team humans successfully with ML behavior.
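As a rough sketch of the interactive loop, an evolutionary search over search-plan parameters can fold in a human's picks each generation as a selection bonus. The plan encoding, fitness function and simulated operator below are toy assumptions, not the study's actual system.

```python
# Toy sketch of an Interactive Machine Learning loop: an evolutionary
# search over search-plan parameters where a human's picks each
# generation bias selection. Encoding and fitness are illustrative.
import numpy as np

rng = np.random.default_rng(0)
pop = rng.uniform(0, 1, size=(8, 3))    # 8 candidate plans, 3 parameters each

def fitness(plan):
    return -np.sum((plan - 0.6) ** 2)   # stand-in for area-coverage score

def human_picks(pop):
    """Placeholder for operator input; here, prefer 'smoother' plans."""
    return set(np.argsort(np.abs(pop[:, 0] - pop[:, 1]))[:2])

for gen in range(20):
    scores = np.array([fitness(p) for p in pop])
    scores[list(human_picks(pop))] += 0.5     # human preference bonus
    parents = pop[np.argsort(scores)[-4:]]    # truncation selection
    children = parents + rng.normal(0, 0.05, size=parents.shape)
    pop = np.vstack([parents, children])

print(pop[np.argmax([fitness(p) for p in pop])])  # best evolved plan
```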