Biblio

Filters: Keyword is false trust
2021-06-28
Liu, Jia, Fu, Hongchuan, Chen, Yunhua, Shi, Zhiping.  2020.  A Trust-based Message Passing Algorithm against Persistent SSDF. 2020 IEEE 20th International Conference on Communication Technology (ICCT). :1112–1115.
As a key technology in cognitive radio, cooperative spectrum sensing has received more and more attention. Multi-user cooperative spectrum sensing can effectively alleviate the performance degradation caused by multipath effects and shadow fading, and improve spectrum utilization. However, malicious users may be present among the cooperating sensing users, sending forged false messages to the fusion center or to neighbor nodes to mislead them into making wrong judgments, which greatly reduces spectrum utilization. To solve this problem, this paper proposes an intelligent anti spectrum sensing data falsification (SSDF) attack algorithm that uses a trust-based non-consensus message passing algorithm. In this scheme, only one sensing round (perception) is needed, and the historical propagation path of each message is taken as the basis for calculating the reputation of each cognitive user. Whenever a node receives conflicting messages attributed to the same cognitive user, there must be a malicious user somewhere on the propagation paths. Nodes that appear more often across different paths are rewarded with reputation value, and nodes that appear less often are punished. Finally, the true value of a tampered message is restored according to the calculated reputation values. MATLAB results show that the proposed scheme has a high recovery rate for messages and can simultaneously identify malicious users in the network.
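
As a rough illustration of the path-based reputation idea described in this abstract (our own Python sketch, not the authors' code), the snippet below rewards relays that appear on consistently delivered copies of a message, penalizes every relay involved in a conflicting delivery, and recovers a tampered value by preferring the copy whose path has the highest minimum reputation. The node names and reward/penalty constants are illustrative assumptions.

    from collections import defaultdict

    def update_reputation(reputation, copies, reward=1.0, penalty=1.0):
        """copies: list of (value, path) pairs received for the same source message."""
        values = {v for v, _ in copies}
        relays = {n for _, path in copies for n in path}
        if len(values) == 1:
            for n in relays:
                reputation[n] += reward      # consistent delivery: reward every relay
        else:
            for n in relays:
                reputation[n] -= penalty     # conflicting copies: every relay is suspect
        return reputation

    def recover_value(reputation, copies):
        """Prefer the copy whose path has the highest minimum (weakest-link) reputation."""
        best = max(copies, key=lambda c: min(reputation[n] for n in c[1]))
        return best[0]

    reputation = defaultdict(float)
    # Two messages propagate consistently through honest relays A-D ...
    update_reputation(reputation, [(5, ["A", "B"]), (5, ["C", "B"])])
    update_reputation(reputation, [(7, ["A", "D"]), (7, ["C", "D"])])
    # ... a third message is falsified by the persistent attacker M on one path.
    tampered = [(3, ["A", "B"]), (9, ["M", "B"])]
    update_reputation(reputation, tampered)
    print(recover_value(reputation, tampered), dict(reputation))  # recovers 3; M ends up lowest
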
Dahiya, Rohan, Jiang, Frank, Doss, Robin Ram.  2020.  A Feedback-Driven Lightweight Reputation Scheme for IoV. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :1060–1068.
Most applications of the Internet of Vehicles (IoV) rely on collaboration between nodes. False information flowing between these rapidly moving nodes therefore poses a challenging trust issue. To resolve this issue, a number of mechanisms have been proposed in the literature for detecting false information and establishing trust in IoV, most of which employ reputation scores as one of the important factors. However, it is critical to have a robust and consistent scheme for aggregating a reputation score for each node based on the accuracy of the information it shares. Such a mechanism is therefore proposed in this paper. The proposed system uses the results of any false-message detection method to generate and share feedback in the network; this feedback is then collected and filtered to remove potentially malicious feedback in order to produce a dynamic reputation score for each node. The reputation system has been experimentally validated and shown to have high accuracy in detecting malicious nodes that send false information, and it is robust, being only negligibly affected by spurious feedback.
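
A minimal sketch of the feedback-aggregation idea (an assumed Beta-reputation form with decay, not the paper's actual scheme): feedback from raters whose own reputation is already low is filtered out, and the remaining verdicts update a dynamic per-node reputation score, weighted by each rater's reputation.

    from collections import defaultdict

    class ReputationStore:
        def __init__(self, decay=0.9):
            self.pos = defaultdict(lambda: 1.0)   # Beta prior (positive evidence)
            self.neg = defaultdict(lambda: 1.0)   # Beta prior (negative evidence)
            self.decay = decay

        def score(self, node):
            return self.pos[node] / (self.pos[node] + self.neg[node])

        def filter_feedback(self, feedback, min_rater_score=0.3):
            """Drop feedback coming from raters whose own reputation is low."""
            return [(r, v) for r, v in feedback if self.score(r) >= min_rater_score]

        def update(self, node, feedback):
            """feedback: list of (rater_id, verdict) with verdict 1 = message was true."""
            feedback = self.filter_feedback(feedback)
            self.pos[node] *= self.decay          # age old evidence so the score stays dynamic
            self.neg[node] *= self.decay
            for rater, verdict in feedback:
                weight = self.score(rater)        # trust-weighted feedback
                if verdict:
                    self.pos[node] += weight
                else:
                    self.neg[node] += weight
            return self.score(node)

    store = ReputationStore()
    print(store.update("car42", [("carA", 1), ("carB", 1), ("carC", 0)]))
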
Oualhaj, Omar Ait, Mohamed, Amr, Guizani, Mohsen, Erbad, Aiman.  2020.  Blockchain Based Decentralized Trust Management framework. 2020 International Wireless Communications and Mobile Computing (IWCMC). :2210–2215.
Blockchain is a technology for storing and transmitting information that is transparent, secure, and operates without central control. In this paper, we propose a new decentralized trust management and cooperation model in which data is shared via blockchain, and we explore the revenue distribution under different consensus schemes. To reduce the computational power required by the control mechanism, our proposal adopts Proof of Trust (PoT) and proof-of-stake-based trust in place of the proof-of-work (PoW) scheme to carry out the mining and storage of new data blocks. To detect nodes with malicious behavior that provide false system information, a trust updating algorithm is proposed.
Hannum, Corey, Li, Rui, Wang, Weitian.  2020.  Trust or Not?: A Computational Robot-Trusting-Human Model for Human-Robot Collaborative Tasks. 2020 IEEE International Conference on Big Data (Big Data). :5689–5691.
The trust of a robot in its human partner is a significant issue in human-robot interaction that is seldom explored in robotics. This study addresses the critical issue of a robot's trust in the human during the human-robot collaboration process, based on data from human motions, the past interactions of the human-robot pair, and the human's current performance in the co-carry task. The trust level is evaluated dynamically throughout the collaborative task, allowing it to change if the human performs false positive actions; this helps the robot avoid making unpredictable movements and causing injury to the human. Experimental results showed that the robot effectively assisted the human in collaborative tasks through the proposed computational trust model.
2021-06-24
Su, Yu, Zhou, Jian, Guo, Zhinuan.  2020.  A Trust-Based Security Scheme for 5G UAV Communication Systems. 2020 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :371—374.
With the increasing demands of social services, unmanned aerial vehicle (UAV)-assisted networks offer a promising prospect for implementing high-rate information transmission and applications. Sensing data can be collected by UAVs, and a large number of UAV-based applications have been realized in 5G networks. However, malicious UAVs may provide false information and disrupt services, so 5G UAV communication systems face security threats. Therefore, this paper develops a novel trust-based security scheme for 5G UAV communication systems. First, the architecture of the 5G UAV communication system is presented to improve communication performance. Second, a trust evaluation scheme for UAVs is developed to assess their reliability. By introducing a trust threshold, malicious UAVs are filtered out of the system to protect its security. Finally, simulation results demonstrate the effectiveness of the proposed scheme.
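
The following sketch illustrates the general trust-threshold idea in Python (our assumption of a median-consistency check; the paper's actual evaluation scheme may differ): each UAV's reports are compared against the median report of its peers, its trust value is raised or lowered accordingly, and UAVs whose trust falls below a threshold are filtered out.

    import statistics

    def evaluate_trust(trust, reports, tolerance=2.0, step=0.1):
        """reports: dict uav_id -> sensed value for the same phenomenon."""
        reference = statistics.median(reports.values())
        for uav, value in reports.items():
            if abs(value - reference) <= tolerance:
                trust[uav] = min(1.0, trust.get(uav, 0.5) + step)       # consistent report
            else:
                trust[uav] = max(0.0, trust.get(uav, 0.5) - 2 * step)   # suspicious report
        return trust

    def filter_malicious(trust, threshold=0.3):
        return {uav for uav, t in trust.items() if t < threshold}

    trust = {}
    for _ in range(5):   # repeated sensing rounds
        trust = evaluate_trust(trust, {"uav1": 20.1, "uav2": 19.8, "uav3": 35.0})
    print(trust, filter_malicious(trust))   # uav3 falls below the trust threshold
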
2020-06-19
Chandra, Yogesh, Jana, Antoreep.  2019.  Improvement in Phishing Websites Detection Using Meta Classifiers. 2019 6th International Conference on Computing for Sustainable Global Development (INDIACom). :637—641.

In the era of an ever-growing number of smart devices, fraudulent practices carried out through phishing websites have become an increasingly severe threat to modern computers and Internet security. These websites are designed to steal personal information from users and spread over the Internet without the knowledge of the user of the system. They give a false impression of genuineness by mirroring real, trusted web pages, which then leads to the loss of important user credentials. Detection of such fraudulent websites is therefore essential and the need of the hour. In this paper, various classifiers were considered, and ensemble classifiers were found to give the best predictions. The underlying idea was to test whether a combined classifier model performs better than a single classifier model, leading to better efficiency and accuracy. For the experiments, three meta classifiers, namely AdaBoostM1, Stacking, and Bagging, were taken into consideration for performance comparison. It was found that meta classifiers built by combining simple classifiers outperform the simple classifiers themselves.
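
For readers who want to reproduce the general setup, the scikit-learn sketch below trains the three meta classifiers named in the abstract (AdaBoostClassifier standing in for Weka's AdaBoostM1); the synthetic data from make_classification is only a placeholder for the phishing-website feature matrix used in the paper.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    # Placeholder data: swap in the real phishing-website feature matrix and labels.
    X, y = make_classification(n_samples=500, n_features=30, random_state=0)

    meta_classifiers = {
        "AdaBoost": AdaBoostClassifier(n_estimators=100, random_state=0),
        "Bagging": BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0),
        "Stacking": StackingClassifier(
            estimators=[("tree", DecisionTreeClassifier(max_depth=5)),
                        ("lr", LogisticRegression(max_iter=1000))],
            final_estimator=LogisticRegression(max_iter=1000)),
    }
    for name, model in meta_classifiers.items():
        print(name, cross_val_score(model, X, y, cv=5).mean())
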

Gu, Chongyan, Chang, Chip Hong, Liu, Weiqiang, Yu, Shichao, Ma, Qingqing, O'Neill, Maire.  2019.  A Modeling Attack Resistant Deception Technique for Securing PUF based Authentication. 2019 Asian Hardware Oriented Security and Trust Symposium (AsianHOST). :1—6.

Due to practical constraints in preventing phishing through public networks or insecure communication channels, a simple physical unclonable function (PUF)-based authentication protocol with unrestricted queries and transparent responses is vulnerable to modeling and replay attacks. In this paper, we present a PUF-based authentication method to mitigate these practical limitations in applications where a resource-rich server authenticates a device, with no strong restriction imposed on the type of PUF design or any additional protection on the binary channel used for the authentication. Our scheme uses an active deception protocol to prevent machine learning (ML) attacks on a device. The monolithic system makes collection of challenge-response pairs (CRPs) easy for model building during enrollment but prohibitively time consuming upon device deployment. A genuine server can perform mutual authentication with the device at any time using a combined fresh challenge contributed by both the server and the device. The messages exchanged in the clear do not expose the authentic CRPs. The false PUF multiplexing is fortified against prediction of the waiting time by doubling the time penalty for every unsuccessful authentication.

Wang, Si, Liu, Wenye, Chang, Chip-Hong.  2019.  Detecting Adversarial Examples for Deep Neural Networks via Layer Directed Discriminative Noise Injection. 2019 Asian Hardware Oriented Security and Trust Symposium (AsianHOST). :1—6.

Deep learning is a popular and powerful machine learning solution for computer vision tasks. Its most criticized vulnerability is its poor tolerance of adversarial images obtained by deliberately adding imperceptibly small perturbations to clean inputs. Such negatives can delude a classifier into wrong decision making. Previous defensive techniques mostly focused on refining the model or on input transformation; they are either implemented only on small datasets or shown to have limited success. Furthermore, they are rarely scrutinized from the hardware perspective, even though Artificial Intelligence (AI) on a chip is the roadmap for embedded intelligence everywhere. In this paper we propose a new discriminative noise-injection strategy that adaptively selects a few dominant layers and progressively discriminates adversarial from benign inputs. This is made possible by evaluating the differences in label change rate between adversarial and natural images when different amounts of noise are injected into the weights of individual layers of the model. The approach is evaluated on the ImageNet dataset with 8-bit truncated models for state-of-the-art DNN architectures. The results show a high detection rate of up to 88.00% with only approximately a 5% false positive rate for MobileNet. Both the detection rate and the false positive rate are well above those of existing advanced defenses against the most practical non-invasive universal perturbation attack on deep-learning-based AI chips.

Lai, Chengzhe, Du, Yangyang, Men, Jiawei, Zheng, Dong.  2019.  A Trust-based Real-time Map Updating Scheme. 2019 IEEE/CIC International Conference on Communications in China (ICCC). :334—339.

Real-time map updating enables vehicles to obtain accurate and timely traffic information. Especially for driverless cars, real-time map updating can provide a high-precision map service to assist navigation, which requires vehicles to actively upload the latest road conditions. However, due to the untrusted network environment, it is difficult for the real-time map updating server to evaluate the authenticity of the road information received from vehicles. In order to prevent malicious vehicles from deliberately spreading false information and to protect the privacy of vehicles against tracking attacks, this paper proposes a trust-based real-time map updating scheme. In this scheme, the public key is used as the identifier of the vehicle for anonymous communication with conditional anonymity. In addition, blockchain is applied to provide proof of existence for the public key certificate of the vehicle. At the same time, to avoid the spread of false messages, a trust evaluation algorithm is designed: the fog node validates the messages received from vehicles using a Bayesian inference model. Based on the verification results, the road condition information is sent to the real-time map updating server so that the server can update the map in time and prevent secondary traffic accidents. In order to calculate the trust value offset for each vehicle, the fog node generates a rating for each message source vehicle and finally adds the relevant data to the blockchain. According to the security analysis, this scheme guarantees anonymity and prevents the Sybil attack. Simulation results show that the proposed scheme is effective and accurate in terms of real-time map updating and trust value calculation.
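
A rough sketch of the Bayesian validation step at the fog node (our illustration, with an assumed likelihood model and trust values, not the authors' formulas): each vehicle report updates the posterior probability that the reported road event is real, weighted by the reporting vehicle's trust.

    def event_posterior(prior, reports):
        """reports: list of (says_event_happened, vehicle_trust in (0, 1)) pairs."""
        p = prior
        for says_yes, trust in reports:
            # Likelihoods, assuming a vehicle reports truthfully with probability
            # equal to its current trust value.
            like_event = trust if says_yes else 1.0 - trust
            like_no_event = 1.0 - trust if says_yes else trust
            p = like_event * p / (like_event * p + like_no_event * (1.0 - p))
        return p

    # Two trusted vehicles confirm the event, one poorly trusted vehicle denies it.
    posterior = event_posterior(prior=0.1, reports=[(True, 0.9), (True, 0.8), (False, 0.4)])
    print(posterior, posterior > 0.5)   # high posterior: forward the event to the map server
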

Baras, John S., Liu, Xiangyang.  2019.  Trust is the Cure to Distributed Consensus with Adversaries. 2019 27th Mediterranean Conference on Control and Automation (MED). :195—202.

Distributed consensus is a prototypical distributed optimization and decision making problem in social, economic and engineering networked systems. In collaborative applications, investigating the effects of adversaries is a critical problem. In this paper we investigate distributed consensus problems in the presence of adversaries. We combine key ideas from distributed consensus in computer science on the one hand and in control systems on the other. The main idea is to detect Byzantine adversaries in a network of collaborating agents whose goal is to reach consensus, and to exclude them from the consensus process and dynamics. We describe a novel trust-aware consensus algorithm that integrates a trust evaluation mechanism into the distributed consensus algorithm and propose various local decision rules based on local evidence. To further enhance the robustness of the trust evaluation itself, we also introduce a trust propagation scheme that takes into account evidence from other nodes in the network. The resulting algorithm is flexible and extensible, and can incorporate more complex designs of decision rules and trust models. To demonstrate the power of our trust-aware algorithm, we provide new theoretical security performance results in terms of miss detection and false alarm rates for regular and general trust graphs. We demonstrate through simulations that the new trust-aware consensus algorithm can effectively detect Byzantine adversaries and exclude them from consensus iterations even in sparse networks with connectivity less than 2f+1, where f is the number of adversaries.
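
A simplified sketch of a trust-aware consensus iteration (our illustration of the general idea; the deviation tolerance, trust increments and network topology are assumed, not taken from the paper): each node updates a local trust value for its neighbours from local evidence and averages only the neighbours whose trust stays above a threshold, so the Byzantine node is excluded from the consensus dynamics.

    import numpy as np

    def trust_aware_consensus(x, neighbors, byzantine, rounds=20, tol=5.0, threshold=0.5):
        x = np.array(x, dtype=float)
        n = len(x)
        trust = np.full((n, n), 0.5)
        for _ in range(rounds):
            # Local evidence: a neighbour whose value is far from the median of the
            # values this node receives loses trust, otherwise it gains trust.
            for i in range(n):
                med = np.median([x[j] for j in neighbors[i]])
                for j in neighbors[i]:
                    delta = 0.2 if abs(x[j] - med) < tol else -0.2
                    trust[i, j] = np.clip(trust[i, j] + delta, 0.0, 1.0)
            new_x = x.copy()
            for i in range(n):
                if i in byzantine:
                    new_x[i] = 100.0        # adversary keeps injecting a false value
                    continue
                trusted = [j for j in neighbors[i] if trust[i, j] >= threshold]
                new_x[i] = np.mean([x[i]] + [x[j] for j in trusted])
            x = new_x
        return x, trust

    neighbors = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
    x, trust = trust_aware_consensus([1.0, 2.0, 3.0, 100.0], neighbors, byzantine={3})
    print(np.round(x, 2))   # honest nodes 0-2 agree on a common value; node 3 is excluded
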

Cha, Suhyun, Ulbrich, Mattias, Weigl, Alexander, Beckert, Bernhard, Land, Kathrin, Vogel-Heuser, Birgit.  2019.  On the Preservation of the Trust by Regression Verification of PLC software for Cyber-Physical Systems of Systems. 2019 IEEE 17th International Conference on Industrial Informatics (INDIN). 1:413—418.

Modern large-scale technical systems often face iterative changes to their behaviour while requiring validated quality, which is not easy to achieve completely with traditional testing. Regression verification is a powerful tool for the formal correctness analysis of software-driven systems. By proving that a new revision of the software behaves similarly to the original version, some of the trust that the old software and system earned during validation processes or operation histories can be inherited by the new revision. This trust inheritance by formal analysis relies on a number of implicit assumptions which are not self-evident but easy to miss, and may lead to a false sense of safety induced by a misunderstood regression verification process. This paper aims at pointing out hidden, implicit assumptions of regression verification in the context of cyber-physical systems by making them explicit using practical examples. The explicit trust inheritance analysis helps engineers understand the extent of the trust that regression verification provides and consequently facilitates their use of this formal technique for system validation.

Eziama, Elvin, Ahmed, Saneeha, Ahmed, Sabbir, Awin, Faroq, Tepe, Kemal.  2019.  Detection of Adversary Nodes in Machine-To-Machine Communication Using Machine Learning Based Trust Model. 2019 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT). :1—6.

Security challenges present in Machine-to-Machine Communication (M2M-C) and the big data paradigm are fundamentally different from conventional network security challenges. In M2M-C paradigms, “Trust” is a vital constituent of security solutions that address security threats, and for such solutions it is important to quantify and evaluate the amount of trust in the information and its source. In this work, we focus on a Machine Learning Based Trust (MLBT) evaluation model for detecting malicious activities in a vehicular-based M2M-C (VBM2M-C) network. In particular, we present an Entropy Based Feature Engineering (EBFE) approach coupled with an Extreme Gradient Boosting (XGBoost) model that is optimized with the Binary Particle Swarm Optimization technique. Based on three performance metrics, i.e., Accuracy Rate (AR), True Positive Rate (TPR) and False Positive Rate (FPR), the effectiveness of the proposed method is evaluated in comparison to state-of-the-art ensemble models such as XGBoost and Random Forest. The simulation results demonstrate the superiority of the proposed model, with approximately 10% improvement in accuracy, TPR and FPR at an attacker density of 30% compared with the state-of-the-art algorithms.

Haefner, Kyle, Ray, Indrakshi.  2019.  ComplexIoT: Behavior-Based Trust For IoT Networks. 2019 First IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :56—65.

This work takes a novel approach to classifying the behavior of devices by exploiting the single-purpose nature of IoT devices and analyzing the complexity and variance of their network traffic. We develop a formalized measurement of complexity for IoT devices and use this measurement to precisely tune an anomaly detection algorithm for each device. We postulate that IoT devices with low complexity lead to high confidence in their behavioral model and a correspondingly more precise decision boundary on their predicted behavior. Conversely, complex general-purpose devices have lower confidence and a more generalized decision boundary. We show that there is a positive correlation between our complexity measure and the number of outliers found by an anomaly detection algorithm. By tuning this decision boundary based on device complexity, we are able to build a behavioral framework for each device that reduces false positive outliers. Finally, we propose an architecture that uses this tuned behavioral model to rank each flow on the network and calculate a trust-score ranking of all traffic to and from a device, which allows the network to autonomously make access control decisions on a per-flow basis.
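
The sketch below illustrates the complexity-tuned decision boundary in Python (our own construction with an entropy-based complexity measure and an IsolationForest score threshold; the paper's actual measure and detector may differ): the low-complexity device gets a tight boundary, the high-complexity device a more generalized one.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    def traffic_complexity(flows, bins=10):
        """Mean normalised Shannon entropy of the flow features, in [0, 1]."""
        entropies = []
        for col in flows.T:
            hist, _ = np.histogram(col, bins=bins)
            p = hist / hist.sum()
            p = p[p > 0]
            entropies.append(float(-(p * np.log2(p)).sum()) / np.log2(bins))
        return float(np.mean(entropies))

    rng = np.random.default_rng(0)
    # Single-purpose device: flow features cluster around a couple of fixed values.
    bulb = rng.choice([64.0, 512.0], size=(500, 3)) + rng.normal(0, 1, size=(500, 3))
    # General-purpose device: flow features spread over a wide range.
    laptop = rng.uniform(40.0, 1500.0, size=(500, 3))

    for name, flows in [("bulb", bulb), ("laptop", laptop)]:
        c = traffic_complexity(flows)
        scores = IsolationForest(random_state=0).fit(flows).score_samples(flows)
        # Low complexity -> tight decision boundary; high complexity -> a more
        # generalised boundary, which keeps false-positive outliers down.
        threshold = scores.mean() - (1.0 + 3.0 * c) * scores.std()
        print(name, "complexity=%.2f" % c, "flagged=%d" % (scores < threshold).sum())
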

Chen, Yanping, Ma, Long, Xia, Hong, Gao, Cong, Wang, Zhongmin, Yu, Zhong.  2019.  Trust-Based Distributed Kalman Filter Estimation Fusion under Malicious Cyber Attacks. 2019 IEEE 21st International Conference on High Performance Computing and Communications; IEEE 17th International Conference on Smart City; IEEE 5th International Conference on Data Science and Systems (HPCC/SmartCity/DSS). :2255—2260.

We consider distributed Kalman filtering for dynamic state estimation over wireless sensor networks. It is promising but challenging when the network is under cyber attack. Because information is exchanged between nodes, malicious attacks quickly spread across the entire network, causing large measurement errors and even the collapse of the sensor network. To counter such malicious attacks, a trust-based distributed processing framework is proposed, which allows neighbor nodes to exchange information and identifies a series of trusted nodes using truth discovery. As a demonstration, distributed cooperative localization is considered, and numerical results are provided to evaluate the performance of the proposed approach under random, false data injection and replay attacks.

Chowdhury, Abdullahi, Karmakar, Gour, Kamruzzaman, Joarder.  2019.  Trusted Autonomous Vehicle: Measuring Trust using On-Board Unit Data. 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :787—792.

Vehicular Ad-hoc Networks (VANETs) play an essential role in ensuring safe, reliable and faster transportation with the help of Intelligent Transportation Systems. The trustworthiness of vehicles in VANETs is extremely important to ensure the authenticity of the messages and traffic information transmitted in extremely dynamic topographical conditions where vehicles move at high speed. False or misleading information may cause substantial traffic congestion, road accidents and may even cost lives. Many approaches exist in the literature to measure the trustworthiness of the GPS data and messages of an Autonomous Vehicle (AV). To the best of our knowledge, they have not considered the trustworthiness of other On-Board Unit (OBU) components of an AV, along with GPS data and transmitted messages, even though these have substantial relevance to overall vehicle trust measurement. In this paper, we introduce a novel model that additionally considers four different OBU components when measuring the overall trustworthiness of an AV. The performance of the proposed method is evaluated with a traffic simulation model developed in Simulation of Urban Mobility (SUMO) using realistic traffic data and considering different levels of uncertainty.

2019-08-05
Kaiafas, G., Varisteas, G., Lagraa, S., State, R., Nguyen, C. D., Ries, T., Ourdane, M..  2018.  Detecting Malicious Authentication Events Trustfully. NOMS 2018 - 2018 IEEE/IFIP Network Operations and Management Symposium. :1-6.

Anomaly detection on security logs is receiving more and more attention. Authentication events are an important component of security logs, and being able to produce trustful and accurate predictions minimizes the effort of cyber-experts to stop false attacks. Observed events are classified as Normal, for legitimate user behavior, or Malicious, for malevolent actions. These classes are consistently and excessively imbalanced, which makes the classification problem harder; in the commonly used Los Alamos dataset, the malicious class comprises only 0.00033% of the total. This work proposes a novel method to extract advanced composite features, and a supervised learning technique for classifying authentication logs trustfully; the models are Random Forest, LogitBoost, Logistic Regression, and ultimately Majority Voting, which leverages the predictions of the previous models to give the final prediction for each authentication event. We measure the performance of our experiments using the False Negative Rate and False Positive Rate. Overall we achieve a 0 False Negative Rate (i.e., no attack was missed) and, on average, a False Positive Rate of 0.0019.
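
The scikit-learn sketch below reproduces the majority-voting setup in outline; GradientBoostingClassifier is used as a rough stand-in for LogitBoost (which scikit-learn does not provide), and a synthetic imbalanced dataset stands in for the Los Alamos authentication logs.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                                  VotingClassifier)
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import confusion_matrix
    from sklearn.model_selection import train_test_split

    # Heavily imbalanced placeholder data standing in for the authentication events.
    X, y = make_classification(n_samples=5000, weights=[0.99], flip_y=0.01, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    vote = VotingClassifier(estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),   # rough LogitBoost stand-in
        ("lr", LogisticRegression(max_iter=1000)),
    ], voting="hard")                                          # majority vote
    vote.fit(X_tr, y_tr)

    tn, fp, fn, tp = confusion_matrix(y_te, vote.predict(X_te)).ravel()
    print("FNR:", fn / (fn + tp), "FPR:", fp / (fp + tn))
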

Sun, M., Li, M., Gerdes, R..  2018.  Truth-Aware Optimal Decision-Making Framework with Driver Preferences for V2V Communications. 2018 IEEE Conference on Communications and Network Security (CNS). :1-9.

In Vehicle-to-Vehicle (V2V) communications, malicious actors may spread false information to undermine the safety and efficiency of the vehicular traffic stream. Thus, vehicles must determine how to respond to the contents of messages which may be false even though they are authenticated, in the sense that receivers can verify that the contents were not tampered with and originated from a verifiable transmitter. Existing solutions for finding appropriate actions are inadequate since they address trust and decision separately, require an honest majority (more honest nodes than malicious ones), and do not incorporate driver preferences in the decision-making process. In this work, we propose a novel trust-aware decision-making framework that does not require an honest majority. It securely determines the likelihood of reported road events despite the presence of false data, and consequently provides the optimal decision for the vehicles. The basic idea of our framework is to leverage the implied effect of the road event to verify the consistency between each vehicle's reported data and its actual behavior, and to determine the data trustworthiness and event belief by integrating Bayes' rule and Dempster-Shafer theory. The resulting belief serves as input to a utility maximization framework focusing on both safety and efficiency. This framework considers the two basic necessities of the Intelligent Transportation System and also incorporates drivers' preferences to decide the optimal action. Simulation results show the robustness of our framework under multiple-vehicle attacks, and different balances between safety and efficiency can be achieved by selecting appropriate human preference factors based on the driver's risk-taking willingness.
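
To make the belief-fusion step concrete, the snippet below implements Dempster's rule of combination over the two-element frame {event, no event}; the mass assignments are illustrative assumptions, not values from the paper.

    def combine(m1, m2):
        """m = {'e': mass(event), 'n': mass(no event), 'u': mass(uncertain)}."""
        # Conflict: one source supports the event while the other rejects it.
        k = m1["e"] * m2["n"] + m1["n"] * m2["e"]
        e = (m1["e"] * m2["e"] + m1["e"] * m2["u"] + m1["u"] * m2["e"]) / (1 - k)
        n = (m1["n"] * m2["n"] + m1["n"] * m2["u"] + m1["u"] * m2["n"]) / (1 - k)
        u = (m1["u"] * m2["u"]) / (1 - k)
        return {"e": e, "n": n, "u": u}

    # Two vehicles report a hazard with high trust, one untrusted vehicle denies it.
    m = {"e": 0.7, "n": 0.1, "u": 0.2}
    m = combine(m, {"e": 0.6, "n": 0.1, "u": 0.3})
    m = combine(m, {"e": 0.1, "n": 0.3, "u": 0.6})
    print(m)   # the belief mass on the event dominates, so the event is accepted
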

Severson, T., Rodriguez-Seda, E., Kiriakidis, K., Croteau, B., Krishnankutty, D., Robucci, R., Patel, C., Banerjee, N..  2018.  Trust-Based Framework for Resilience to Sensor-Targeted Attacks in Cyber-Physical Systems. 2018 Annual American Control Conference (ACC). :6499-6505.

Networked control systems improve the efficiency of cyber-physical plants both functionally, by making available data generated even in far-flung locations, and operationally, by the adoption of standard protocols. A side effect, however, is that the safety and stability of a local process and, in turn, of the entire plant are now more vulnerable to malicious agents. Leveraging the communication infrastructure, the authors present the design of networked control systems with built-in resilience. Specifically, the paper addresses attacks known as false data injections that originate within compromised sensors. In the proposed framework for closed-loop control, the feedback signal is constructed by weighted consensus of estimates of the process state gathered from other interconnected processes. Observers are introduced to generate the state estimates from the local data. Side-channel monitors are attached to each primary sensor in order to assess proper code execution. These monitors provide estimates of the trust assigned to each observer output and, more importantly, are independent of it; these estimates serve as weights in the consensus algorithm. The authors tested the concept on a multi-sensor networked physical experiment with six primary sensors. The weighted consensus was demonstrated to yield a feedback signal within the specified accuracy even when four of the six primary sensors were injecting false data.
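
A minimal sketch of the trust-weighted consensus step (an assumed form, not the authors' controller): each observer contributes a state estimate, the side-channel monitor supplies its trust weight, and the feedback signal is the normalized weighted average, so estimates driven by compromised sensors are largely ignored.

    import numpy as np

    def fused_feedback(estimates, trust):
        """estimates: (m, n) observer state estimates; trust: (m,) monitor weights."""
        w = np.asarray(trust, dtype=float)
        return (w / w.sum()) @ np.asarray(estimates, dtype=float)

    estimates = np.array([[1.0, 0.1],     # honest observer
                          [9.0, 5.0],     # observers fed by sensors injecting false data
                          [8.5, 4.0],
                          [9.5, 5.5],
                          [8.0, 4.5],
                          [1.1, 0.0]])    # honest observer
    trust = [0.90, 0.01, 0.01, 0.01, 0.01, 0.95]  # monitors flag four compromised sensors
    print(fused_feedback(estimates, trust))       # stays near the honest estimates
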

Ma, S., Zeng, S., Guo, J..  2018.  Research on Trust Degree Model of Fault Alarms Based on Neural Network. 2018 12th International Conference on Reliability, Maintainability, and Safety (ICRMS). :73-77.

False alarms and misses are the two general kinds of alarm errors, and they can decrease an operator's trust in the alarm system. Specifically, there are two different forms of trust in such systems, represented in this research by two kinds of responses to alarms: one is compliance and the other is reliance. Besides false alarms and misses, the two responses are differentially affected by properties of the alarm system, situational factors and operator factors. However, most existing studies have only qualitatively analyzed the relationship between a single variable and the two responses. In this research, all available experimental studies were identified through database searches using the keyword "compliance and reliance", without restriction on year of publication, up to December 2017. Six relevant studies and fifty-two sets of key data were obtained as the data base of this research. Furthermore, a neural network is adopted as a tool to establish the quantitative relationship between multiple factors and each of the two forms of trust. The result will be of great significance for further study of the influence of human decision making on the overall fault detection rate and the false alarm rate of the human-machine system.

Gerard, B., Rebaï, S. B., Voos, H., Darouach, M..  2018.  Cyber Security and Vulnerability Analysis of Networked Control System Subject to False-Data Injection. 2018 Annual American Control Conference (ACC). :992-997.

In this paper, the problem of networked control system (NCS) cyber security is considered. The geometric approach is used to evaluate the security and vulnerability level of the controlled system. The proposed results concern so-called false data injection attacks and show how imperfectly known disturbances can be used to perform undetectable, or at least stealthy, attacks that make the NCS vulnerable to malicious outsiders. A numerical example is given to illustrate the approach.

Ghugar, U., Pradhan, J..  2018.  NL-IDS: Trust Based Intrusion Detection System for Network Layer in Wireless Sensor Networks. 2018 Fifth International Conference on Parallel, Distributed and Grid Computing (PDGC). :512-516.

In recent years, security in wireless sensor networks (WSNs) has become essential because WSN applications involve sharing important information between nodes. A large number of security issues arise due to the open deployment of the network: attackers disturb the security system by attacking the different protocol layers of the WSN. The standard AODV routing protocol faces security issues when the route discovery process takes place, yet data should be transmitted to the destination along a secure path. To support this process, we have proposed a trust-based intrusion detection system (NL-IDS) for the network layer in WSNs to detect black hole attackers in the network. The trust of a sensor node is calculated from the deviation of key factors at the network layer with respect to the black hole attack. We use the watchdog technique, in which a sensor node continuously monitors its neighbor node by calculating a periodic trust value. Finally, the overall trust value of the sensor node is evaluated from the gathered network-layer trust metrics (past and present trust values). This NL-IDS scheme efficiently identifies malicious nodes with respect to black hole attacks at the network layer. To analyze the performance of NL-IDS, we have simulated the model in MATLAB R2015a, and the results show that NL-IDS is better than Wang et al. [11] in terms of detection accuracy and false alarm rate.
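
The snippet below sketches the watchdog-based trust update in Python (an assumed weighting rule, not the exact NL-IDS formulas): the observed forwarding ratio of a neighbor is blended with its previous trust value, and nodes whose trust drops below a threshold are flagged as black holes.

    def update_trust(prev_trust, forwarded, received, alpha=0.6):
        """Blend the current watchdog observation with the accumulated (past) trust."""
        current = forwarded / received if received else prev_trust
        return alpha * current + (1 - alpha) * prev_trust

    def is_black_hole(trust, threshold=0.4):
        return trust < threshold

    trust = {"n1": 0.5, "n2": 0.5}
    observations = [  # (neighbor, packets forwarded, packets handed over) per period
        ("n1", 9, 10), ("n2", 0, 10),
        ("n1", 10, 10), ("n2", 1, 10),
    ]
    for node, fwd, rcv in observations:
        trust[node] = update_trust(trust[node], fwd, rcv)
    print(trust, {n: is_black_hole(t) for n, t in trust.items()})   # n2 is flagged
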

Ahmad, F., Adnane, A., Kurugollu, F., Hussain, R..  2019.  A Comparative Analysis of Trust Models for Safety Applications in IoT-Enabled Vehicular Networks. 2019 Wireless Days (WD). :1-8.
A Vehicular Ad-hoc NETwork (VANET) is a vital transportation technology that allows vehicles to share sensitive information (such as steep-curve warnings and black ice on the road) with each other and with the surrounding infrastructure in real time, to avoid accidents and enable a comfortable driving experience. To achieve these goals, VANET requires a secure environment for authentic, reliable and trusted information dissemination among the network entities. However, VANET is prone to different attacks resulting in the dissemination of compromised/false information among network nodes. One way to manage a secure and trusted network is to introduce trust among the vehicular nodes. To this end, various Trust Models (TMs) have been developed for VANET, and they can be broadly categorized into three classes: Entity-oriented Trust Models (ETM), Data-oriented Trust Models (DTM) and Hybrid Trust Models (HTM). These TMs evaluate trust based on the received information (data), the vehicle (entity) or both, through different mechanisms. In this paper, we present a comparative study of the three TMs. Furthermore, we evaluate these TMs against different trust, security and quality-of-service related benchmarks. Simulation results reveal that all these TMs have deficiencies in terms of end-to-end delays, event detection probabilities and false positive rates. This study can be used as a guideline for researchers designing new efficient and effective TMs for VANET.
Tofighi-Shirazi, Ramtine, Christofi, Maria, Elbaz-Vincent, Philippe, Le, Thanh-ha.  2018.  DoSE: Deobfuscation Based on Semantic Equivalence. Proceedings of the 8th Software Security, Protection, and Reverse Engineering Workshop. :1:1-1:12.

Software deobfuscation is a key challenge in malware analysis for understanding the internal logic of the code and establishing adequate countermeasures. In order to defeat recent obfuscation techniques, state-of-the-art generic deobfuscation methodologies are based on dynamic symbolic execution (DSE). However, DSE suffers from limitations such as code coverage and scalability. In the race to counter and remove the most advanced obfuscation techniques, there is a need to reduce the amount of code to cover. To that end, we propose a novel deobfuscation approach based on semantic equivalence, called DoSE. With DoSE, we aim to improve and complement DSE-based deobfuscation techniques by statically eliminating obfuscation transformations (built on code reuse), which improves code coverage. Our method's novelty comes from transposing existing binary diffing techniques, namely semantic equivalence checking, to the deobfuscation of previously untreated techniques, such as two-way opaque constructs, that we encounter in surreptitious software. In order to challenge DoSE, we used both known malware, such as Cryptowall, WannaCry, Flame and BitCoinMiner, and obfuscated code samples. Our experimental results show that DoSE is an efficient strategy for detecting obfuscation transformations based on code reuse, with low rates of false positive and/or false negative results in practice, and up to 63% code reduction on certain types of malware.

Nabipourshiri, Rouzbeh, Abu-Salih, Bilal, Wongthongtham, Pornpit.  2018.  Tree-Based Classification to Users' Trustworthiness in OSNs. Proceedings of the 2018 10th International Conference on Computer and Automation Engineering. :190-194.

In the light of the information revolution and the propagation of big social data, the dissemination of misleading information is certainly difficult to control, due to the rapid and intensive flow of information through unconfirmed sources driven by propaganda and tendentious rumors. This causes confusion and loss of trust between individuals and groups, and even between governments and their citizens. It necessitates a consolidation of efforts to stop the penetration of false information by developing theoretical and practical methodologies that aim to measure the credibility of users of these virtual platforms. This paper presents an approach to domain-based prediction of users' trustworthiness in Online Social Networks (OSNs). By incorporating three machine learning algorithms, the experimental results verify the applicability of the proposed approach to classifying and predicting domain-based trustworthy users of OSNs.

Zhuang, Wei, Zeng, Qingfeng.  2018.  A Trust-Based Framework for Internet Word of Mouth Effect in B2C Environment. Proceedings of the 2Nd International Conference on Computer Science and Application Engineering. :151:1-151:5.

As a valuable source of information, Word Of Mouth has always been valued by consumers and business marketers. The Internet provides a new medium for Word Of Mouth communication: consumers share their views and comments on products, services, brands and enterprises through online platforms, thus forming Internet Word Of Mouth, which is of great importance to B2C enterprises. However, disturbing and even false information, as well as the uncertainties and risks of the online communication environment, lead to a crisis of online trust. Accordingly, this study constructs a trust mechanism model of the Internet Word Of Mouth effect, which shows that the professionalism of communicators, online relationship strength, communication channels and product involvement are key factors significantly affecting the Word Of Mouth effect. This model can provide theoretical guidance for word-of-mouth marketing and the operation of B2C e-commerce enterprises.