Biblio

Filters: Keyword is trust computation
2020-10-12
Granatyr, Jones, Gomes, Heitor Murilo, Dias, João Miguel, Paiva, Ana Maria, Nunes, Maria Augusta Silveira Netto, Scalabrin, Edson Emílio, Spak, Fábio.  2019.  Inferring Trust Using Personality Aspects Extracted from Texts. 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC). :3840–3846.
Trust mechanisms are considered the logical protection of software systems, preventing malicious people from taking advantage of or cheating others. Although these concepts are widely used, most applications in this field do not consider affective aspects to aid in trust computation. Researchers in psychology, neurology, anthropology, and computer science argue that affective aspects are essential to humans' decision-making processes. So far, there is a lack of understanding about how these aspects affect users' trust, particularly when users are inserted in an evaluation system. In this paper, we propose a trust model that accounts for personality using three personality models: Big Five, Needs, and Values. We tested our approach by extracting personality aspects from texts provided by two online human-fed evaluation systems and correlating them to reputation values. The empirical experiments show statistically significantly better results in comparison to non-personality-aware approaches.
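The abstract does not give the paper's actual computation; as a purely illustrative sketch (the trait names, weights, and data below are assumptions, not taken from the paper), personality aspects extracted from texts can be combined into a trust score and then correlated with observed reputation values:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

def personality_trust(traits, weights):
    """Weighted combination of extracted personality-aspect scores
    (e.g. Big Five dimensions) into a single trust score."""
    return sum(weights[k] * traits[k] for k in weights)

# Hypothetical data: trait scores extracted from users' texts and the
# reputation each user earned in an online evaluation system.
users = [
    {"agreeableness": 0.9, "conscientiousness": 0.8},
    {"agreeableness": 0.4, "conscientiousness": 0.5},
    {"agreeableness": 0.7, "conscientiousness": 0.6},
]
weights = {"agreeableness": 0.5, "conscientiousness": 0.5}
scores = [personality_trust(u, weights) for u in users]
reputations = [0.85, 0.45, 0.65]
correlation = pearson(scores, reputations)
```

A strong positive correlation would suggest, as the paper argues, that personality aspects carry signal about reputation.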
2020-07-30
Reddy, Vijender Busi, Negi, Atul, Venkataraman, S, Venkataraman, V Raghu.  2019.  A Similarity based Trust Model to Mitigate Badmouthing Attacks in Internet of Things (IoT). 2019 IEEE 5th World Forum on Internet of Things (WF-IoT). :278–282.

In the Internet of Things (IoT), each object is addressable, trackable, and accessible on the Internet. To be useful, objects in IoT cooperate and exchange information. IoT networks are open, anonymous, and dynamic in nature, so a malicious object may enter the network and disrupt it. Trust models have been proposed to identify malicious objects and to improve the reliability of the network. Recommendations are the basis of trust computation in these models, which makes them vulnerable to badmouthing and collusion attacks. In this paper, we propose a similarity model to mitigate badmouthing and collusion attacks and show that the proposed method efficiently removes the impact of malicious recommendations in trust computation.
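The paper's similarity measure is not detailed in the abstract; a minimal sketch of the general idea, under the assumption that a recommender is compared against one's own direct observations of commonly rated objects (function names, threshold, and data are illustrative):

```python
def similarity(own, theirs):
    """Similarity in [0, 1] from the mean absolute difference between our
    direct ratings and a recommender's ratings of the same objects."""
    return 1.0 - sum(abs(a - b) for a, b in zip(own, theirs)) / len(own)

def filter_recommendations(own_ratings, recommendations, threshold=0.8):
    """Discard recommendations from recommenders whose ratings of
    commonly observed objects diverge from our own observations."""
    return {
        recommender: value
        for recommender, (shared_ratings, value) in recommendations.items()
        if similarity(own_ratings, shared_ratings) >= threshold
    }

# Hypothetical data: a badmouther rates the same objects far lower than
# our own observations, so its recommendation is filtered out before
# trust computation.
own = [0.9, 0.8, 0.7]
recs = {
    "honest_node": ([0.85, 0.80, 0.75], 0.9),
    "badmouther": ([0.10, 0.20, 0.10], 0.0),
}
accepted = filter_recommendations(own, recs)
```

Dropping dissimilar recommenders in this way is what blunts badmouthing: a lying recommendation never enters the aggregate.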

2019-06-10
Singh, Prateek Kumar, Kar, Koushik.  2018.  Countering Control Message Manipulation Attacks on OLSR. Proceedings of the 19th International Conference on Distributed Computing and Networking. :22:1–22:9.

In this work we utilize a Reputation Routing Model (RRM), which we developed in an earlier work, to mitigate the impact of three different control-message-based blackhole attacks in Optimized Link State Routing (OLSR) for Mobile Ad Hoc Networks (MANETs). A malicious node can potentially introduce three types of blackhole attacks on OLSR, namely the TC-Blackhole attack, the HELLO-Blackhole attack, and the TC-HELLO-Blackhole attack, by modifying its TC and HELLO messages with false information and disseminating them in the network in order to fake its advertisement. This results in nodes diverting their messages toward the malicious node, posing great security risks. Our solution reduces the risk posed by such bad nodes in the network and tries to isolate such links by feeding correct link state information to OLSR. We evaluate the performance of our model by emulating network scenarios on the Common Open Research Emulator (CORE) for static as well as dynamic topologies. From our findings, it is observed that our model diminishes the effect of all three blackhole attacks on the OLSR protocol in terms of packet delivery rates, especially for static and low-mobility topologies.
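The abstract does not specify how RRM feeds corrected link state into OLSR; as a hedged sketch of the general approach (the function, reputation scores, and topology below are invented for illustration), advertised links can be screened by the reputation of the TC message's originator before route computation:

```python
def accept_links(tc_messages, reputation, min_reputation=0.5):
    """Build the link set fed to route computation, keeping only links
    advertised by originators whose reputation is high enough."""
    links = set()
    for originator, advertised in tc_messages:
        if reputation.get(originator, 0.0) >= min_reputation:
            links.update((originator, neighbor) for neighbor in advertised)
    return links

# Hypothetical scenario: node "M" fakes links in its TC message to draw
# traffic toward itself, but its low reputation excludes the advertisement.
reputation = {"A": 0.9, "B": 0.8, "M": 0.1}
tc_messages = [("A", ["B", "C"]), ("B", ["A"]), ("M", ["A", "B", "C"])]
topology = accept_links(tc_messages, reputation)
```

With the faked links removed, shortest-path computation no longer routes packets through the blackhole node.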

2018-02-14
Jayasinghe, Upul, Lee, Hyun-Woo, Lee, Gyu Myoung.  2017.  A Computational Model to Evaluate Honesty in Social Internet of Things. Proceedings of the Symposium on Applied Computing. :1830–1835.
Trust in the Social Internet of Things has opened new horizons in collaborative networking, particularly by allowing objects to communicate with their service providers based on relationships analogous to those in the human world. However, strengthening trust is a challenging task, as it involves identifying several influential factors in each domain of social-cyber-physical systems in order to build a reliable system. In this paper, we address the issue of understanding and evaluating honesty, an important trust metric in the trustworthiness evaluation process in social networks. First, we identify and define several trust attributes that directly affect honesty. Then, a subjective computational model is derived from the experiences of objects and the opinions of friendly objects with respect to the identified attributes. Based on the outputs of this model, a final honesty level is predicted using regression analysis. Finally, the effectiveness of our model is tested using simulations.
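The paper's subjective model is not given in the abstract; a minimal sketch of the standard experience-plus-opinion combination it describes (the function name and the weighting parameter alpha are assumptions, not the authors' notation):

```python
def honesty_level(direct_experience, friend_opinions, alpha=0.6):
    """Combine an object's own experience with the average opinion of
    friendly objects; alpha weights own experience over recommendations."""
    if friend_opinions:
        recommended = sum(friend_opinions) / len(friend_opinions)
    else:
        recommended = direct_experience
    return alpha * direct_experience + (1 - alpha) * recommended
```

In the paper this kind of combined score is then fed into regression analysis to predict a final honesty level; the weighting here simply illustrates how direct and indirect evidence can be balanced.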