Bibliography
With the development of Online Social Networks (OSNs), OSNs have become popular platforms for publishing resources and establishing relationships with friends. However, due to the lack of prior knowledge about others, network activities, especially those involving money, usually carry risk. It is therefore necessary to quantify the trust relationships of users in OSNs, which can help a user decide whether to trust another user. In this paper, we present a novel method for evaluating trust in OSNs using knowledge graphs (KGs), a cornerstone of artificial intelligence. We make two contributions to trust evaluation in OSNs: (i) a novel method using an RNN to quantify trustworthiness in OSNs, inspired by relationship prediction in KGs; and (ii) a Path Reliability Measuring (PRM) algorithm to determine the reliability of a path from the trustor to the trustee. Experimental results show that our method is more effective than traditional methods.
Decision making in utilities, municipal, and energy companies depends on accurate and trustworthy weather information and predictions. Recently, crowdsourced personal weather stations (PWS) are increasingly being used to provide higher spatial and temporal resolution of weather measurements. However, tools and methods to ensure the trustworthiness of the crowdsourced data in real time are lacking. In this paper, we present a Reputation System for Crowdsourced Rainfall Networks (RSCRN) to assign trust scores to personal weather stations in a region. Using real PWS data from the Weather Underground service in the high-flood-risk region of Norfolk, Virginia, we evaluate the performance of the proposed RSCRN. The proposed method is able to converge to a confident trust score for a PWS within 10–20 observations after installation. Collectively, the results indicate that the trust score derived from the RSCRN can reflect the collective trustworthiness of a PWS, ensuring both useful and trustworthy data for modeling and decision-making in the future.
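One common way to maintain a per-station trust score that converges with evidence is a Beta-reputation update; the sketch below is purely illustrative (the agreement scoring, prior, and confidence formula are assumptions, not the RSCRN algorithm itself).

```python
# Toy Beta-reputation trust score for a personal weather station (PWS).
# Each observation is scored as agreeing (1) or disagreeing (0) with
# nearby reference gauges; the trust score stabilizes as evidence grows.
# NOTE: illustrative assumption, not the paper's actual RSCRN method.

class BetaTrust:
    def __init__(self):
        self.alpha = 1.0  # prior pseudo-count of agreements
        self.beta = 1.0   # prior pseudo-count of disagreements

    def update(self, agrees: bool) -> None:
        if agrees:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def score(self) -> float:
        # Expected value of Beta(alpha, beta): trust in [0, 1].
        return self.alpha / (self.alpha + self.beta)

    @property
    def confidence(self) -> float:
        # Grows toward 1 as observations accumulate (hypothetical scale).
        n = self.alpha + self.beta - 2.0
        return n / (n + 10.0)

t = BetaTrust()
for obs in [1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1]:
    t.update(bool(obs))
print(round(t.score, 3), round(t.confidence, 3))
```

After roughly 10–20 observations the confidence term dominates the prior, mirroring the convergence behavior the abstract reports.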
In this paper, we study trust-related human factors in supervisory control of swarm robots with varied levels of autonomy (LOA) in a target foraging task. We compare three LOAs: manual, mixed-initiative (MI), and fully autonomous. In the manual LOA, the human operator chooses headings for a flocking swarm, issuing new headings as needed. In the fully autonomous LOA, the swarm is redirected automatically by changing headings using a search algorithm. In the mixed-initiative LOA, if performance declines, control is switched from human to swarm or from swarm to human. The results of this work extend current knowledge on human factors in swarm supervisory control. Specifically, the finding that the relationship between trust and performance improved for passively monitoring operators (i.e., improved situation awareness in higher LOAs) is particularly novel in its contradiction of earlier work. We also find that operators switch the degree of autonomy when their trust in the swarm system is low. Last, our analysis confirms operators' preference for a lower LOA in a new domain of swarm control.
The integration of different network functions, such as routing, administration, and security, is basic to the effective operation of a mobile ad hoc network (MANET); yet researchers typically address the problems of QoS and security separately. Considered in isolation, security and QoS can each negatively influence the overall performance of the network; in fact, they can interfere with the proper operation of QoS and security algorithms and may affect critical and essential services required in the MANET. Our paper outlines two goals: achieving security and achieving quality of service. The approach toward these goals is to design and implement a protocol suite for policy-based network administration, along with methodologies for key management and the deployment of IPsec in a MANET.
The need to process the variety, volume, and velocity of data generated by today's Internet of Things (IoT) devices has pushed both academia and industry to investigate new architectural alternatives to support the new challenges. As a result, Edge Computing (EC) has emerged to address these issues by placing part of the cloud resources (e.g., computation, storage, logic) closer to the edge of the network, which allows faster and context-dependent data analysis and storage. However, as EC infrastructures grow, different providers who do not necessarily trust each other need to collaborate in order to serve different IoT devices. In this context, EC infrastructures, IoT devices, and the data transiting the network all need to be subject to identity and provenance checks in order to increase trust and accountability. Each device and datum in the network needs to be identified, and the provenance of its actions needs to be tracked. In this paper, we propose a blockchain- and container-based architecture that implements the W3C PROV Data Model to track the identities and provenance of all orchestration decisions of a business network. This architecture provides new forms of interaction between the different stakeholders, supports trustworthy transactions, and leads to a new decentralized interaction model for IoT-based applications.
In Vehicle-to-Vehicle (V2V) communications, malicious actors may spread false information to undermine the safety and efficiency of the vehicular traffic stream. Thus, vehicles must determine how to respond to the contents of messages that may be false even though they are authenticated, in the sense that receivers can verify that contents were not tampered with and originated from a verifiable transmitter. Existing solutions for finding appropriate actions are inadequate since they address trust and decision separately, require an honest majority (more honest vehicles than malicious ones), and do not incorporate driver preferences into the decision-making process. In this work, we propose a novel trust-aware decision-making framework that does not require an honest majority. It securely determines the likelihood of reported road events despite the presence of false data, and consequently provides the optimal decision for the vehicles. The basic idea of our framework is to leverage the implied effect of the road event to verify the consistency between each vehicle's reported data and actual behavior, and to determine data trustworthiness and event belief by integrating Bayes' rule and Dempster-Shafer theory. The resulting belief serves as input to a utility maximization framework focused on both safety and efficiency. This framework considers the two basic necessities of the Intelligent Transportation System and also incorporates drivers' preferences to decide the optimal action. Simulation results show the robustness of our framework under multiple-vehicle attacks, and different balances between safety and efficiency can be achieved by selecting appropriate human preference factors based on the driver's risk-taking willingness.
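The evidence-fusion step named above rests on Dempster's rule of combination. A minimal sketch over a two-element frame (event occurred vs. not) follows; the mass values are hypothetical and the paper's full Bayes/DST integration is richer than this.

```python
# Toy Dempster-Shafer combination of two vehicles' evidence about a road
# event "E" over the frame {E, notE}. Focal sets are frozensets; THETA
# (the whole frame) carries residual uncertainty. Masses are hypothetical.

from itertools import product

THETA = frozenset({"E", "notE"})

def combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule: conjunctive combination, normalized by conflict."""
    raw, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass that would go to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {s: w / (1.0 - conflict) for s, w in raw.items()}

# Vehicle 1 weakly supports the event; vehicle 2 supports it strongly.
m1 = {frozenset({"E"}): 0.6, THETA: 0.4}
m2 = {frozenset({"E"}): 0.7, THETA: 0.3}
belief = combine(m1, m2)
# belief[{E}] = 0.42 + 0.18 + 0.28 = 0.88, belief[THETA] = 0.12
```

Combining independent supporting evidence raises the belief in the event above either source alone, which is the effect the framework exploits before utility maximization.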
Social media has become one of the most effective and direct conveyors of public opinion. This paper discusses a strategy that uses Twitter data to infer public opinion. The public expresses sentiments about specific entities with varying strength and intensity, and these sentiments are strongly related to personal mood and emotions. Many methods and lexical resources have been proposed to extract sentiments from natural language texts addressing various opinions. This article proposes an approach for boosting Twitter sentiment classification using various sentiment proportions as meta-level features. The analysis was carried out on tweets about the iPhone 6.
Two-factor authentication (2FA) systems provide another layer of protection for users' accounts beyond passwords. Traditional hardware-token and software-token 2FA are not burdenless to users, since they require users to read, remember, and type a one-time code, and they incur high deployment and operation costs. Recent 2FA mechanisms, such as Sound-Proof, reduce or eliminate user interaction for the proof of the second factor; however, they are not designed to be used in certain settings (e.g., quiet environments or PCs without built-in microphones), and they are not secure in the presence of certain attacks (e.g., the sound-danger attack and the co-located attack). To address these problems, we propose Typing-Proof, a usable, secure, and low-cost two-factor authentication mechanism. Typing-Proof is similar to software-token 2FA in the sense that it uses a password as the first factor and a registered phone to prove the second factor. During the second-factor authentication procedure, it requires a user to type a random code on the login computer and authenticates the user by comparing the keystroke timing sequence of the random code recorded by the login computer with the sounds of typing the random code recorded by the user's registered phone. Typing-Proof can be reliably used in any setting and requires zero user-phone interaction in most cases. It is practically secure and immune to the attacks that affect recent 2FA mechanisms. In addition, Typing-Proof enables significant cost savings for both service providers and users.
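The core comparison can be illustrated by matching inter-key intervals, which cancels the clock offset between the two devices. The tolerance, timestamps, and decision rule below are hypothetical, a sketch of the idea rather than the paper's actual matching algorithm.

```python
# Toy illustration of the Typing-Proof comparison idea: the login
# computer records key-press timestamps, the phone records typing-sound
# onsets; inter-key intervals should agree even though the clocks differ.
# Tolerance and sample values are hypothetical.

def intervals(ts):
    return [b - a for a, b in zip(ts, ts[1:])]

def match(pc_times, phone_times, tol=0.03):
    """Accept if every inter-key interval agrees within `tol` seconds."""
    if len(pc_times) != len(phone_times) or len(pc_times) < 2:
        return False
    diffs = [abs(x - y) for x, y in
             zip(intervals(pc_times), intervals(phone_times))]
    return max(diffs) <= tol

pc = [0.00, 0.18, 0.41, 0.55, 0.90]      # key-press timestamps (s)
phone = [1.20, 1.39, 1.63, 1.76, 2.11]   # sound onsets, offset clock
attacker = [0.00, 0.30, 0.45, 0.80, 1.00]
print(match(pc, phone), match(pc, attacker))  # prints: True False
```

Because only interval differences are compared, no clock synchronization between the phone and the login computer is needed.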
Use of multi-objective probabilistic planning to synthesize the behavior of CPSs can play an important role in engineering systems that must self-optimize for multiple quality objectives and operate under uncertainty. However, the reasoning behind automated planning is opaque to end-users. They may not understand why a particular behavior is generated, and therefore may not be able to calibrate their confidence in the system working properly. To address this problem, we propose a method to automatically generate verbal explanations of multi-objective probabilistic planning that describe why a particular behavior is generated on the basis of the optimization objectives. Our explanation method involves describing the objective values of a generated behavior and explaining any tradeoff made to reconcile competing objectives. We contribute: (i) an explainable planning representation that facilitates explanation generation, and (ii) an algorithm for generating contrastive justifications as explanations for why a generated behavior is best with respect to the planning objectives. We demonstrate our approach on a mobile robot case study.
The Tularosa study was designed to understand how defensive deception, both cyber and psychological, affects cyber attackers. Over 130 red teamers participated in a two-day network penetration test in which we controlled both the presence and the explicit mention of deceptive defensive techniques. To our knowledge, this represents the largest study of its kind ever conducted on a professional red team population. The study included a battery of questionnaires (e.g., experience, personality) and cognitive tasks (e.g., fluid intelligence, working memory), allowing for the characterization of a "typical" red teamer, as well as physiological measures (e.g., galvanic skin response, heart rate) to be correlated with the cyber events. This paper focuses on the design, implementation, population characteristics, lessons learned, and planned analyses.
Due to the changing nature of Mobile Ad-Hoc Networks (MANETs), security is an important concern. In this paper, we present a vector-based trust mechanism in which trust in each node is established from its behavior in forwarding or dropping data packets, and we use an Enhanced Certificate Revocation scheme (ECR) to defend against attackers by blacklisting blackhole nodes. To further enhance the security of nodes and the network, we assign a unique key to every individual node, which can prevent most attacks in a MANET.
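A minimal sketch of behavior-based trust from forwarding versus dropping counts, with a blacklist threshold, is shown below. The ratio formula, threshold, and counts are assumptions for illustration; the paper's vector-based mechanism and ECR scheme are not reproduced here.

```python
# Toy behavior-based trust: a node's trust is the fraction of packets it
# forwarded; nodes below a threshold are flagged as blackhole suspects.
# Formula, threshold, and counts are hypothetical.

def trust(forwarded: int, dropped: int) -> float:
    total = forwarded + dropped
    return forwarded / total if total else 0.5  # neutral prior

def blacklist(stats: dict, threshold: float = 0.2) -> set:
    """stats maps node id -> (forwarded, dropped)."""
    return {n for n, (f, d) in stats.items() if trust(f, d) < threshold}

stats = {"A": (95, 5), "B": (40, 60), "C": (2, 98)}
print(blacklist(stats))  # prints: {'C'}
```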
Cyber physical systems are the key innovation driver for many domains such as automotive, avionics, industrial process control, and factory automation. However, their interconnection potentially provides adversaries easy access to sensitive data, code, and configurations. If attackers gain control, material damage or even harm to people must be expected. To counteract data theft, system manipulation, and cyber-attacks, security mechanisms must be embedded in the cyber physical system. Adding hardware security in the form of the standardized Trusted Platform Module (TPM) is a promising approach. At the same time, traditional dependability features such as safety, availability, and reliability have to be maintained. To determine the right balance between security and dependability, it is essential to understand their interferences. This paper supports developers in identifying the implications of using TPMs on the dependability of their system. We highlight potential consequences of adding TPMs to cyber-physical systems by considering the resulting safety, reliability, and availability. Furthermore, we discuss the potential of enhancing the dependability of TPM services by applying traditional redundancy techniques.
This project develops techniques to protect against sensor attacks on cyber-physical systems. Specifically, a resilient version of the Kalman filtering technique, accompanied by a watermarking approach, is proposed to detect cyber-attacks and estimate the correct state of the system. The defense techniques are used in conjunction and validated on two case studies: (i) an unmanned ground vehicle (UGV) in which an attacker alters the reference angle, and (ii) a Cube Satellite (CubeSat) in which an attacker modifies the orientation of the satellite, degrading its performance. Based on this work, we show that the proposed techniques used in conjunction achieve better resiliency and defense capability than either technique alone against spoofing and replay attacks.
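The detection side of such a defense typically rests on the Kalman filter's innovation (residual): spoofed measurements inflate the normalized residual. The one-dimensional sketch below illustrates only that residual test; the model constants, threshold, and attack values are hypothetical, and the paper's resilient filter plus watermarking scheme is considerably richer.

```python
# Toy 1-D Kalman filter with an innovation (residual) test that flags a
# spoofed sensor stream. All constants are hypothetical.

def run(measurements, a=1.0, q=0.01, r=0.1, threshold=3.0):
    x, p = 0.0, 1.0          # state estimate and its variance
    alarms = []
    for z in measurements:
        # predict
        x, p = a * x, a * a * p + q
        # innovation and its variance
        nu, s = z - x, p + r
        alarms.append(abs(nu) / s ** 0.5 > threshold)  # residual test
        # update
        k = p / s
        x, p = x + k * nu, (1 - k) * p
    return alarms

clean = [0.0] * 10
spoofed = [0.0] * 5 + [5.0] * 5   # attacker injects a 5.0 bias mid-stream
print(any(run(clean)), any(run(spoofed)))  # prints: False True
```

A replay attack defeats this plain residual test when the replayed data is statistically consistent, which is exactly why the paper pairs the filter with watermarking (a known perturbation whose echo must appear in honest measurements).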
In medical human-robot interactions, trust plays an important role, since for patients there may be more at stake than during other kinds of encounters with robots. In the current study, we address issues of trust in the interaction with a prototype of a therapeutic robot, the Universal RoboTrainer, in which the therapist records patient-specific tasks by means of kinesthetic guidance of the patient's arm, which is connected to the robot. We carried out a user study with twelve pairs of participants who collaborated on recording a training program on the robot. We examine (a) the degree to which participants identify the situation as uncomfortable or distressing, (b) participants' own strategies to mitigate that stress, (c) the degree to which the robot is held responsible for the problems occurring and the amount of agency ascribed to it, and (d) when usability issues arise, what effect these have on participants' trust. We find signs of distress mostly in contexts with usability issues, as well as many verbal and kinesthetic mitigation strategies intuitively employed by the participants. Recommendations for robots to increase users' trust in kinesthetic interactions include the timely production of verbal cues that continuously confirm that everything is alright, as well as increased contingency in the presentation of strategies for recovering from usability issues that arise.
Smartphones have become ubiquitous in our everyday lives, providing diverse functionalities via millions of applications (apps) that are readily available. To achieve these functionalities, apps need to access and utilize potentially sensitive data stored in the user's device. This can pose a serious threat to users' security and privacy when considering malicious or underskilled developers. While application marketplaces, like the Google Play Store and Apple App Store, provide factors like ratings, user reviews, and number of downloads to distinguish benign from risky apps, studies have shown that these metrics are not adequately effective. The security and privacy health of an application should also be considered to generate a more reliable and transparent trustworthiness score. In order to automate the trustworthiness assessment of mobile applications, we introduce the Trust4App framework, which not only considers the publicly available factors mentioned above but also takes into account the Security and Privacy (S&P) health of an application. Additionally, it considers the S&P posture of a user and provides a holistic, personalized trustworthiness score. While existing automatic trustworthiness frameworks only consider trustworthiness indicators (e.g., permission usage, privacy leaks) individually, Trust4App is, to the best of our knowledge, the first framework to combine these indicators. We also implement a proof-of-concept realization of our framework and demonstrate that Trust4App provides a more comprehensive, intuitive, and actionable trustworthiness assessment compared to existing approaches.
An industrial cluster is an important organizational form and carrier of development for small and medium-sized enterprises, and an information service platform is an important facility of an industrial cluster. Improving the credibility of the network platform helps eliminate the adverse effects of distrust and information asymmetry on industrial clusters. The decentralization, transparency, openness, and intangibility of blockchain technology make it a natural choice for trustworthiness optimization of an industrial cluster network platform. This paper first studies trust standards for industry cluster network platforms and constructs a new trust framework for such platforms. The paper then focuses on trustworthiness optimization of the data layer and application layer of the platform. The purpose of this paper is to build an industrial cluster network platform with data accessibility, information trustworthiness, function availability, high speed, and low consumption, and to promote the sustainable and efficient development of industrial clusters.
Currently, due to improvements in defensive systems, network covert channels are increasingly drawing the attention of cybercriminals and malware developers, as they can make malicious communication stealthy and thus bypass existing security solutions. On the other hand, the data hiding methods used are becoming increasingly sophisticated, as attackers, in order to stay under the radar, distribute the covert data among many connections, protocols, etc. That is why the detection of such threats is becoming a pressing issue. In this paper we make an initial step in this direction by presenting a data-mining-based detection of such advanced threats that relies on a pattern discovery technique. The initial experimental results indicate that this solution has potential and should be further investigated.
Network covert channels are currently typically seen as a security threat which can result in, e.g., confidential data leakage or a hidden data exchange between malicious parties. However, in this paper we investigate network covert channels from a less obvious angle, i.e., we verify whether it is possible to use them as a green networking technique. Our observation is that covert channels usually utilize various redundant "resources" in network protocols, e.g., unused/reserved fields that would have been transmitted anyway. Therefore, using such "resources" for legitimate transmissions can increase the total available bandwidth without sending more packets, thus offering potential energy savings. However, it must be noted that the embedding and extracting processes related to data hiding consume energy, too. That is why, in this paper, we try to establish whether the energy potentially saved through covert channel utilization exceeds the effort needed to establish and maintain covert data transmission. For this purpose, a proof-of-concept implementation has been created to experimentally measure the impact of network covert channels on the resulting energy consumption. The obtained results show that the approach can be useful mostly under specific circumstances, i.e., when the total energy consumption of the network devices is already relatively high. Furthermore, the impact of different types of network covert channels on energy consumption is examined to assess their usefulness from the green networking perspective.
"Good Governance" - may it be corporate or governmental, is a badly needed focus area in the world today where the companies and governments are struggling to survive the political and economical turmoil around the globe. All governments around the world have a tendency of expanding the size of their government, but eventually they would be forced to think reducing the size by incorporating information technology as a way to provide services to the citizens effectively and efficiently. Hence our attempt is to offer a complete solution from birth of a citizen till death encompassing all the necessary services related to the well being of a person living in a society. Our research and analysis would explore the pros and cons of using IT as a solution to our problems and ways to implement them for a best outcome in e-Governance occasionally comparing with the present scenario when relevant.