Bibliography
In an era of ever-growing numbers of smart devices, fraudulent practices through phishing websites have become an increasingly severe threat to modern computing and internet security. These websites are designed to steal personal information from users and spread across the internet without the user's knowledge. They give a false impression of authenticity by mirroring real, trusted web pages, which leads to the loss of important user credentials. Detecting such fraudulent websites is therefore essential. In this paper, various classifiers are considered, and ensemble classifiers are found to predict with the greatest efficiency. The underlying question is whether a combined classifier model performs better than a single classifier model, leading to better efficiency and accuracy. For experimentation, three meta classifiers, namely AdaBoostM1, Stacking, and Bagging, are compared. The results show that meta classifiers built by combining simple classifiers outperform the simple classifiers themselves.
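A minimal sketch of the comparison this abstract describes, using scikit-learn analogues of the three meta classifiers (the paper itself names the Weka implementations); the synthetic dataset is an illustrative stand-in for phishing-website features:

```python
# Compare a single classifier against the three meta classifiers named in
# the abstract (AdaBoost, Bagging, Stacking) via cross-validated accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Placeholder for the phishing-website feature matrix used in the paper.
X, y = make_classification(n_samples=2000, n_features=30, random_state=0)

models = {
    "single tree": DecisionTreeClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "Bagging": BaggingClassifier(random_state=0),
    "Stacking": StackingClassifier(
        estimators=[("tree", DecisionTreeClassifier()), ("nb", GaussianNB())],
        final_estimator=LogisticRegression(),
    ),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name:12s} accuracy: {acc:.3f}")
```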
Due to practical constraints in preventing phishing over public networks or insecure communication channels, a simple physical unclonable function (PUF)-based authentication protocol with unrestricted queries and transparent responses is vulnerable to modeling and replay attacks. In this paper, we present a PUF-based authentication method that mitigates these practical limitations in applications where a resource-rich server authenticates a device, with no strong restriction imposed on the type of PUF design and no additional protection on the binary channel used for authentication. Our scheme uses an active deception protocol to prevent machine learning (ML) attacks on the device. The monolithic system makes collecting challenge-response pairs (CRPs) easy for model building during enrollment but prohibitively time consuming after device deployment. A genuine server can perform mutual authentication with the device at any time using a fresh challenge contributed jointly by the server and the device. The messages exchanged in the clear do not expose the authentic CRPs. The false PUF multiplexing is fortified against prediction of the waiting time by doubling the time penalty after every unsuccessful authentication.
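A minimal sketch of the doubling time penalty in the last sentence: every failed authentication doubles the wait before the device answers again, making CRP harvesting prohibitively slow after deployment. The class and method names and the one-second base penalty are illustrative assumptions; the PUF response verification itself is abstracted away:

```python
import time

class DeceptivePUFDevice:
    BASE_PENALTY_S = 1.0  # assumed initial penalty; the paper fixes no value here

    def __init__(self):
        self.failures = 0
        self.locked_until = 0.0

    def authenticate(self, response_ok: bool) -> bool:
        now = time.monotonic()
        if now < self.locked_until:
            return False  # still serving the time penalty; refuse to respond
        if response_ok:
            self.failures = 0  # genuine server: reset the penalty
            return True
        # Unsuccessful attempt: the penalty doubles each time
        # (base * 2^(failures-1)), stalling model-building attackers.
        self.failures += 1
        self.locked_until = now + self.BASE_PENALTY_S * (2 ** (self.failures - 1))
        return False
```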
Deep learning is a popular and powerful machine learning solution to computer vision tasks. The most criticized vulnerability of deep learning is its poor tolerance of adversarial images, obtained by deliberately adding imperceptibly small perturbations to clean inputs. Such adversarial examples can mislead a classifier into wrong decisions. Previous defensive techniques have mostly focused on refining the model or transforming the input; they have either been implemented only on small datasets or shown limited success. Furthermore, they are rarely scrutinized from the hardware perspective, even though Artificial Intelligence (AI) on a chip is the roadmap for embedded intelligence everywhere. In this paper, we propose a new discriminative noise injection strategy that adaptively selects a few dominant layers and progressively discriminates adversarial from benign inputs. This is made possible by evaluating the differences in label change rate between adversarial and natural images when different amounts of noise are injected into the weights of individual layers of the model. The approach is evaluated on the ImageNet dataset with 8-bit truncated models of state-of-the-art DNN architectures. The results show a detection rate of up to 88.00% with only approximately 5% false positives for MobileNet. Both the detection rate and the false positive rate improve well beyond existing advanced defenses against the most practical noninvasive universal perturbation attack on deep-learning-based AI chips.
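A minimal PyTorch sketch of the core measurement: inject Gaussian noise into the weights of a chosen layer and record how often the predicted label flips (the label change rate). The model, layer name, noise scale, and decision threshold are illustrative assumptions, not the paper's calibrated settings:

```python
import copy
import torch

@torch.no_grad()
def label_change_rate(model, x, layer_name, sigma=0.02, trials=20):
    """Fraction of inputs whose label flips under noisy weights."""
    base_label = model(x).argmax(dim=1)
    flips = 0
    for _ in range(trials):
        noisy = copy.deepcopy(model)
        w = dict(noisy.named_parameters())[layer_name]
        w.add_(torch.randn_like(w) * sigma)  # perturb one dominant layer
        flips += (noisy(x).argmax(dim=1) != base_label).sum().item()
    return flips / (trials * x.shape[0])

# Adversarial inputs tend to flip labels more readily, so an LCR above a
# threshold tau calibrated on clean data flags the input as adversarial:
# is_adversarial = label_change_rate(net, batch, "features.0.weight") > tau
```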
Real-time map updating enables vehicles to obtain accurate and timely traffic information. For driverless cars in particular, real-time map updating can provide a high-precision map service to assist navigation, which requires vehicles to actively upload the latest road conditions. However, due to the untrusted network environment, it is difficult for the real-time map updating server to evaluate the authenticity of road information reported by vehicles. To prevent malicious vehicles from deliberately spreading false information and to protect vehicle privacy against tracking attacks, this paper proposes a trust-based real-time map updating scheme. In this scheme, the public key is used as the vehicle's identifier for anonymous communication with conditional anonymity. In addition, a blockchain is used to provide an existence proof for the vehicle's public key certificate. To curb the spread of false messages, a trust evaluation algorithm is designed: the fog node validates messages received from vehicles using a Bayesian Inference Model. Based on the verification results, the road condition information is sent to the real-time map updating server so that the server can update the map in time and prevent secondary traffic accidents. To calculate the trust value offset for each vehicle, the fog node generates a rating for each message source vehicle and finally adds the relevant data to the blockchain. Security analysis shows that the scheme guarantees anonymity and prevents Sybil attacks. Simulation results show that the proposed scheme is effective and accurate in terms of real-time map updating and trust value calculation.
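A minimal sketch of one common way to realize such a Bayesian Inference Model at the fog node, a Beta-reputation update: each message verified as true or false updates the source vehicle's trust value. The uniform prior and the posterior-mean trust formula are illustrative assumptions:

```python
class VehicleTrust:
    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha = alpha  # pseudo-count of messages verified as true
        self.beta = beta    # pseudo-count of messages verified as false

    def update(self, message_was_true: bool):
        if message_was_true:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def trust(self) -> float:
        # Posterior mean of the Beta(alpha, beta) distribution.
        return self.alpha / (self.alpha + self.beta)

t = VehicleTrust()
for verdict in [True, True, False, True]:  # fog node's verification results
    t.update(verdict)
print(f"trust value: {t.trust:.2f}")  # 0.67 with a uniform prior
```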
Distributed consensus is a prototypical distributed optimization and decision making problem in social, economic, and engineering networked systems. In collaborative applications, investigating the effects of adversaries is a critical problem. In this paper, we investigate distributed consensus problems in the presence of adversaries. We combine key ideas from distributed consensus in computer science on the one hand and in control systems on the other. The main idea is to detect Byzantine adversaries in a network of collaborating agents whose goal is to reach consensus, and to exclude them from the consensus process and dynamics. We describe a novel trust-aware consensus algorithm that integrates a trust evaluation mechanism into the distributed consensus algorithm, and we propose various local decision rules based on local evidence. To further enhance the robustness of trust evaluation itself, we also introduce a trust propagation scheme that takes into account the evidence of other nodes in the network. The resulting algorithm is flexible and extensible and can incorporate more complex designs of decision rules and trust models. To demonstrate the power of our trust-aware algorithm, we provide new theoretical security performance results in terms of miss detection and false alarm rates for regular and general trust graphs. We demonstrate through simulations that the new trust-aware consensus algorithm can effectively detect Byzantine adversaries and exclude them from consensus iterations, even in sparse networks with connectivity less than 2f+1, where f is the number of adversaries.
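A minimal sketch of a trust-aware consensus iteration: each node averages only the values of neighbors whose trust exceeds a threshold, so detected Byzantine nodes drop out of the dynamics. The trust-update rule shown (deviation from the local mean) is a simple illustrative stand-in for the paper's local decision rules and trust propagation scheme:

```python
import numpy as np

def trust_aware_consensus_step(x, adjacency, trust, tau=0.5, eps=0.3):
    """One synchronous step; x: values, trust: pairwise trust matrix."""
    n = len(x)
    x_next = x.copy()
    for i in range(n):
        nbrs = [j for j in range(n) if adjacency[i, j] and trust[i, j] > tau]
        if nbrs:
            # Standard consensus update restricted to trusted neighbors.
            x_next[i] = x[i] + eps * np.mean([x[j] - x[i] for j in nbrs])
        # Local evidence: penalize neighbors far from the local mean.
        local_mean = np.mean([x[j] for j in nbrs] + [x[i]])
        for j in range(n):
            if adjacency[i, j]:
                deviating = abs(x[j] - local_mean) > 1.0  # illustrative rule
                trust[i, j] = 0.9 * trust[i, j] + 0.1 * (0.0 if deviating else 1.0)
    return x_next, trust
```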
Modern large-scale technical systems often face iterative changes to their behaviours while requiring validated quality, which is not easy to achieve completely with traditional testing. Regression verification is a powerful tool for the formal correctness analysis of software-driven systems. By proving that a new revision of the software behaves similarly to the original version, some of the trust that the old software and system earned during validation processes or operational history can be inherited by the new revision. This trust inheritance through formal analysis relies on a number of implicit assumptions that are not self-evident, are easy to miss, and may lead to a false sense of safety induced by a misunderstood regression verification process. This paper points out the hidden, implicit assumptions of regression verification in the context of cyber-physical systems by making them explicit through practical examples. An explicit trust inheritance analysis clarifies for engineers the extent of the trust that regression verification provides, and consequently helps them utilize this formal technique for system validation.
Security challenges present in Machine-to-Machine Communication (M2M-C) and the big data paradigm are fundamentally different from conventional network security challenges. In M2M-C paradigms, trust is a vital constituent of security solutions that address security threats, and for such solutions it is important to quantify and evaluate the amount of trust in the information and its source. In this work, we focus on a Machine Learning (ML) Based Trust (MLBT) evaluation model for detecting malicious activities in a vehicular-based M2M-C (VBM2M-C) network. In particular, we present an Entropy Based Feature Engineering (EBFE) coupled Extreme Gradient Boosting (XGBoost) model optimized with the Binary Particle Swarm Optimization technique. Based on three performance metrics, i.e., Accuracy Rate (AR), True Positive Rate (TPR), and False Positive Rate (FPR), the effectiveness of the proposed method is evaluated against state-of-the-art ensemble models such as XGBoost and Random Forest. The simulation results demonstrate the superiority of the proposed model, with approximately 10% improvement in accuracy, TPR, and FPR at an attacker density of 30% compared with the state-of-the-art algorithms.
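A minimal sketch of the EBFE + XGBoost pipeline: derive an entropy feature from each vehicle's message behavior and feed it to an XGBoost classifier. The raw feature construction and the placeholder labels are illustrative assumptions, and the BPSO hyperparameter search from the paper is omitted for brevity:

```python
import numpy as np
from scipy.stats import entropy
from xgboost import XGBClassifier

def entropy_feature(message_intervals):
    # Shannon entropy of the inter-message interval histogram; erratic
    # (malicious) senders tend to show atypical entropy values.
    hist, _ = np.histogram(message_intervals, bins=10, density=True)
    return entropy(hist + 1e-12)

rng = np.random.default_rng(0)
# Placeholder dataset: one entropy feature plus one generic feature per vehicle.
X = np.array([[entropy_feature(rng.exponential(1.0, 50)), rng.normal()]
              for _ in range(200)])
y = rng.integers(0, 2, 200)  # placeholder labels: 1 = malicious vehicle

clf = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
clf.fit(X, y)
```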
This work takes a novel approach to classifying the behavior of devices by exploiting the single-purpose nature of IoT devices and analyzing the complexity and variance of their network traffic. We develop a formalized measurement of complexity for IoT devices and use this measurement to precisely tune an anomaly detection algorithm for each device. We postulate that IoT devices with low complexity lead to high confidence in their behavioral model and a correspondingly more precise decision boundary on their predicted behavior. Conversely, complex general-purpose devices have lower confidence and a more generalized decision boundary. We show that there is a positive correlation between our complexity measure and the number of outliers found by an anomaly detection algorithm. By tuning the decision boundary based on device complexity, we are able to build a behavioral framework for each device that reduces false positive outliers. Finally, we propose an architecture that uses this tuned behavioral model to rank each flow on the network and calculate a trust score ranking of all traffic to and from a device, allowing the network to autonomously make access control decisions on a per-flow basis.
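A minimal sketch of complexity-tuned anomaly detection: estimate a device's traffic complexity as an entropy over its flow features, then loosen the detector's decision boundary for complex devices so their natural variance is not flagged. The entropy proxy (column 0 assumed to hold destination ports) and the mapping from complexity to the contamination parameter are illustrative assumptions:

```python
import numpy as np
from scipy.stats import entropy
from sklearn.ensemble import IsolationForest

def traffic_complexity(flows):
    # Entropy of destination-port usage as a crude complexity proxy;
    # assumes column 0 of the flow feature matrix holds the port.
    _, counts = np.unique(flows[:, 0], return_counts=True)
    return entropy(counts / counts.sum())

def tuned_detector(flows, base=0.08, k=0.02):
    # Complex devices get a more generalized (looser) boundary, i.e. a
    # lower expected outlier fraction; simple devices keep a tight one.
    c = traffic_complexity(flows)
    contamination = float(np.clip(base - k * c, 0.005, 0.5))
    return IsolationForest(contamination=contamination, random_state=0).fit(flows)
```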
We consider distributed Kalman filtering for dynamic state estimation over wireless sensor networks. It is promising but challenging when the network is under cyber attack. Because of the information exchange between nodes, malicious attacks spread quickly across the entire network, causing large measurement errors and even the collapse of the sensor network. To counter such malicious attacks, a trust-based distributed processing framework is proposed, which allows neighboring nodes to exchange information and identifies a set of trusted nodes using truth discovery. As a demonstration, distributed cooperative localization is considered, and numerical results are provided to evaluate the performance of the proposed approach under random, false data injection, and replay attacks.
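A minimal sketch of the truth-discovery step and trust-weighted fusion: nodes whose estimates sit far from the iteratively refined "truth" receive low trust and contribute little to the fused state. The inverse-error trust rule is an illustrative stand-in for the paper's truth discovery procedure:

```python
import numpy as np

def truth_discovery_trust(estimates, iters=10):
    """estimates: (n_nodes, dim) array of local state estimates."""
    trust = np.ones(len(estimates))
    for _ in range(iters):
        truth = np.average(estimates, weights=trust, axis=0)
        err = np.linalg.norm(estimates - truth, axis=1)
        trust = 1.0 / (err + 1e-6)   # distant (possibly attacked) nodes
        trust /= trust.sum()          # get low weight
    return trust

def trusted_fusion(estimates):
    estimates = np.asarray(estimates, dtype=float)
    return np.average(estimates, weights=truth_discovery_trust(estimates), axis=0)
```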
Vehicular Ad-hoc Networks (VANETs) play an essential role in ensuring safe, reliable, and faster transportation with the help of Intelligent Transportation Systems. The trustworthiness of vehicles in VANETs is extremely important to ensure the authenticity of messages and traffic information transmitted under extremely dynamic topographical conditions, where vehicles move at high speed. False or misleading information may cause substantial traffic congestion and road accidents, and may even cost lives. Many approaches exist in the literature to measure the trustworthiness of the GPS data and messages of an Autonomous Vehicle (AV). To the best of our knowledge, however, they have not considered the trustworthiness of the other On-Board Unit (OBU) components of an AV alongside GPS data and transmitted messages, although these have substantial relevance to overall vehicle trust measurement. In this paper, we introduce a novel model that measures the overall trustworthiness of an AV by additionally considering four different OBU components. The performance of the proposed method is evaluated with a traffic simulation model developed in Simulation of Urban Mobility (SUMO) using realistic traffic data and considering different levels of uncertainty.
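A minimal sketch of the overall-trust idea: per-component trust scores (GPS, messages, and OBU components) are combined into one vehicle trust value via a weighted average. The component list, scores, and equal weights are illustrative assumptions, not the paper's calibrated model:

```python
def vehicle_trust(scores: dict, weights: dict) -> float:
    """Weighted average of per-component trust scores in [0, 1]."""
    total = sum(weights[k] for k in scores)
    return sum(scores[k] * weights[k] for k in scores) / total

scores = {"gps": 0.9, "messages": 0.8, "camera": 0.95, "lidar": 0.85,
          "radar": 0.9, "speed_sensor": 0.7}  # hypothetical components
weights = {k: 1.0 for k in scores}  # equal weights as a neutral default
print(f"overall trust: {vehicle_trust(scores, weights):.2f}")  # 0.85
```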
Many consumers now rely on different forms of voice assistants, both stand-alone devices and those built into smartphones. Currently, these systems react to specific wake words, such as "Alexa," "Siri," or "Ok Google." However, with advancements in natural language processing, the next generation of voice assistants could instead listen continuously to the acoustic environment and proactively provide services and recommendations based on conversations without being explicitly invoked. We refer to such devices as "always-listening voice assistants" and explore expectations around their potential use. In this paper, we report on a 178-participant survey investigating the potential services people anticipate from such a device and how they feel about sharing their data for these purposes. Our findings reveal that participants can anticipate a wide range of services pertaining to a conversation; however, most of these services are very similar to those that existing voice assistants already provide through explicit commands. Participants are more likely to consent to sharing a conversation when they do not find it sensitive, when they are comfortable with the service and find it beneficial, and when they already own a stand-alone voice assistant. Based on our findings, we discuss the privacy challenges in designing an always-listening voice assistant.
Despite corporate cyber intrusions attracting all the attention, privacy breaches that we, as ordinary users, should be worried about occur every day without any scrutiny. Smartphones, a household item, have inadvertently become a major enabler of privacy breaches. Smartphone platforms use permission systems to regulate access to sensitive resources. These permission systems, however, lack the ability to understand users' privacy expectations, leaving a significant gap between how permission models behave and how users would want the platform to protect their sensitive data. This dissertation provides an in-depth analysis of how users make privacy decisions in the context of smartphones and how platforms can accommodate users' privacy requirements systematically. We first performed a 36-person field study to quantify how often applications access protected resources when users are not expecting it. We found that when the application requesting a permission runs invisibly, users are more likely to deny it access to protected resources; at least 80% of our participants would have preferred to prevent at least one permission request. To explore the feasibility of predicting users' privacy decisions based on their past decisions, we performed a longitudinal 131-person field study. Based on the data, we built a classifier that makes privacy decisions on the user's behalf by detecting when the context has changed and inferring privacy preferences from the user's past decisions. We showed that our approach can accurately predict users' privacy decisions 96.8% of the time, an 80% reduction in error rate compared to current systems. Based on these findings, we developed a custom Android version with a contextually aware permission model. The new model guards resources based on the user's past decisions under similar contextual circumstances. We performed a 38-person field study to measure the efficiency and usability of the new permission model. Based on exit interviews and 5M data points, we found that the new system is effective in reducing potential violations by 75%. Despite being significantly more restrictive than the default permission system, participants did not find the new model to cause any usability issues in terms of application functionality.
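A minimal sketch of a contextually aware permission decision in the spirit of the classifier described above: allow/deny is predicted from contextual features (requesting app, permission type, foreground visibility) learned from the user's past decisions. The features, toy training data, and model choice are illustrative assumptions:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer

# Hypothetical log of the user's past permission decisions with context.
past_decisions = [
    ({"app": "maps", "perm": "location", "visible": True}, "allow"),
    ({"app": "game", "perm": "location", "visible": False}, "deny"),
    ({"app": "game", "perm": "contacts", "visible": True}, "deny"),
    ({"app": "maps", "perm": "location", "visible": False}, "allow"),
]
vec = DictVectorizer()
X = vec.fit_transform([ctx for ctx, _ in past_decisions])
y = [decision for _, decision in past_decisions]
clf = RandomForestClassifier(random_state=0).fit(X, y)

# New request: an app asking for location while running invisibly.
request = {"app": "game", "perm": "location", "visible": False}
print(clf.predict(vec.transform([request]))[0])
```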
We present a scalable dynamic analysis framework that allows for the automatic evaluation of the privacy behaviors of Android apps. We use our system to analyze mobile apps' compliance with the Children's Online Privacy Protection Act (COPPA), one of the few stringent privacy laws in the U.S. Based on our automated analysis of 5,855 of the most popular free children's apps, we found that a majority are potentially in violation of COPPA, mainly due to their use of third-party SDKs. While many of these SDKs offer configuration options to respect COPPA by disabling tracking and behavioral advertising, our data suggest that a majority of apps either do not make use of these options or incorrectly propagate them across mediation SDKs. Worse, we observed that 19% of children's apps collect identifiers or other personally identifiable information (PII) via SDKs whose terms of service outright prohibit their use in child-directed apps. Finally, we show that efforts by Google to limit tracking through the use of a resettable advertising ID have had little success: of the 3,454 apps that share the resettable ID with advertisers, 66% transmit other, non-resettable, persistent identifiers as well, negating any intended privacy-preserving properties of the advertising ID.