Biblio

Filters: Keyword is information trust
Perarasi, T., Vidhya, S., Moses, M. Leeban, Ramya, P.  2020.  Malicious Vehicles Identifying and Trust Management Algorithm for Enhance the Security in 5G-VANET. 2020 Second International Conference on Inventive Research in Computing Applications (ICIRCA). :269–275.
In the fifth generation of vehicular communication, security against various malicious attacks is achieved using a malicious vehicle identification and trust management (MAT) algorithm. The proposed MAT algorithm operates in two dimensions: (i) node trust and (ii) information trust, accompanied by a digital signature and a hash chain concept. For node trust, the MAT algorithm introduces a special key exchange among all members holding the public group key, and vehicles with the same target location are then formed into a cluster. The public group key is common to every participant, but each maintains its own private key to produce the secret key. The MAT algorithm converts the secret key into a unique form that allows the cluster members (CMs) to recover it using their own private keys. This key exchange helps prevent various attacks, such as impersonation and man-in-the-middle attacks. For information trust, the MAT algorithm assigns special nodes (equidistant from both vehicles) to monitor message-forwarding activity and routing behavior over a given time window. This scheme helps identify the exact intruder, and after the timeout the special node drops all collected information. The proposed MAT algorithm accurately evaluates the trustworthiness of each node as well as of the information itself to counter different attacks, and it proves efficient at improving group lifetime, cluster stability, and the number of vehicles that reach their target location on time.
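The abstract gives no pseudocode, so the following is only a minimal Python sketch of the two mechanisms it names: distributing one shared cluster session key so that each cluster member recovers it with its own private key, and a hash chain for checking forwarded messages. The XOR-mask wrapping, the use of SHA-256, and all class and variable names are assumptions made for illustration, not the authors' construction; a real deployment would wrap the session key under each member's public key rather than let the cluster head read private keys.

# Illustrative sketch only: the MAT paper publishes no pseudocode, so the key
# wrapping, hash chain, and structure below are assumptions chosen to mirror
# the abstract's description, not the authors' exact algorithm.
import hashlib
import secrets


def h(data: bytes) -> bytes:
    """SHA-256, used both for key wrapping and for the hash chain."""
    return hashlib.sha256(data).digest()


class ClusterMember:
    def __init__(self, vehicle_id: str):
        self.vehicle_id = vehicle_id
        self.private_key = secrets.token_bytes(32)  # per-vehicle private key
        self.session_key = None

    def unwrap(self, group_public: bytes, wrapped: bytes) -> None:
        # Only the holder of private_key can strip the wrapping mask.
        mask = h(self.private_key + group_public)
        self.session_key = bytes(a ^ b for a, b in zip(wrapped, mask))


class ClusterHead:
    """Distributes one group session key, individually wrapped per member."""

    def __init__(self, members):
        self.members = members
        self.group_public = secrets.token_bytes(32)  # public group value
        self.session_key = secrets.token_bytes(32)   # shared secret key

    def distribute(self) -> None:
        for m in self.members:
            # Assumption for brevity: the head derives the mask directly from the
            # member's private key; in practice the key would be wrapped under the
            # member's public key (e.g., ECIES) instead.
            mask = h(m.private_key + self.group_public)
            wrapped = bytes(a ^ b for a, b in zip(self.session_key, mask))
            m.unwrap(self.group_public, wrapped)


def hash_chain(seed: bytes, length: int):
    """Pre-computed hash chain; later elements verify earlier disclosures."""
    chain = [seed]
    for _ in range(length - 1):
        chain.append(h(chain[-1]))
    return chain


if __name__ == "__main__":
    members = [ClusterMember(f"vehicle-{i}") for i in range(3)]
    head = ClusterHead(members)
    head.distribute()
    assert all(m.session_key == head.session_key for m in members)

    chain = hash_chain(secrets.token_bytes(32), length=5)
    # A verifier holding chain[-1] can check a sender who reveals chain[-2].
    assert h(chain[-2]) == chain[-1]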
Horne, Benjamin D., Gruppi, Mauricio, Adali, Sibel.  2019.  Trustworthy Misinformation Mitigation with Soft Information Nudging. 2019 First IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :245–254.

Research in combating misinformation reports many negative results: facts may not change minds, especially if they come from sources that are not trusted, and individuals can disregard and justify lies told by trusted sources. This problem is made even worse by social recommendation algorithms, which amplify conspiracy theories and information confirming one's own biases because companies optimize for clicks and watch time over individuals' values and the public good. As a result, more nuanced voices and facts are drowned out by a continuous erosion of trust in better information sources. Most misinformation mitigation techniques assume that discrediting, filtering, or demoting low-veracity information will help news consumers make better information decisions. However, these negative results indicate that some news consumers, particularly extreme or conspiracy news consumers, will not be helped. We argue that, given this background, technology solutions to combating misinformation should not simply seek facts or discredit bad news sources, but instead use more subtle nudges toward better information consumption. Repeated exposure to such nudges can help promote trust in better information sources and improve societal outcomes in the long run. In this article, we discuss technological solutions that can help develop such an approach and introduce one such model, called Trust Nudging.
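The abstract does not describe the Trust Nudging model's internals, so the short sketch below only illustrates the general idea of a soft nudge applied to a recommendation ranking: candidate items are ordered mostly by relevance, with a small bonus for higher-trust sources. The Article fields, the 0.15 blending weight, and the example trust scores are hypothetical and do not reproduce the paper's model.

# Hypothetical illustration of a "soft nudge" over a news recommendation list;
# the blending weight and the trust scores are assumptions for demonstration
# and are not taken from the paper.
from dataclasses import dataclass


@dataclass
class Article:
    title: str
    relevance: float     # match to the user's interests, in [0, 1]
    source_trust: float  # estimated reliability of the outlet, in [0, 1]


def nudged_ranking(articles, nudge_weight: float = 0.15):
    """Rank mostly by relevance, with a small bonus for trusted sources.

    Keeping nudge_weight small preserves the user's preferences while gently
    and repeatedly surfacing slightly more reliable alternatives."""
    return sorted(
        articles,
        key=lambda a: (1 - nudge_weight) * a.relevance + nudge_weight * a.source_trust,
        reverse=True,
    )


if __name__ == "__main__":
    feed = [
        Article("Conspiracy blog post", relevance=0.90, source_trust=0.10),
        Article("Wire-service report", relevance=0.85, source_trust=0.90),
    ]
    for a in nudged_ranking(feed):
        print(a.title)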