Bibliography
This paper investigates the impact of authentication on the effective capacity (EC) of an underwater acoustic (UWA) channel. Specifically, the UWA channel is under impersonation attack by a malicious node (Eve) in the close vicinity of the legitimate node pair (Alice and Bob); Eve tries to inject her malicious data into the system by making Bob believe that she is indeed Alice. To thwart Eve's impersonation attack, Bob uses the distance of the transmitting node as the feature/fingerprint for feature-based authentication at the physical layer. Owing to the authentication at Bob, the lack of channel knowledge at the transmitting node (Alice or Eve), and the threshold-based decoding error model, the relevant dynamics of the considered system can be modelled by a Markov chain (MC). We therefore compute the state-transition probabilities of the MC and the moment generating function (MGF) of the service process corresponding to each state, which enables us to derive a closed-form expression for the EC in terms of the authentication parameters. Furthermore, we compute the optimal transmission rate (at Alice) via a gradient-descent (GD) technique and an artificial neural network (ANN) method. Simulation results show that the EC decreases under severe authentication constraints (i.e., more false alarms and more transmissions by Eve), and that the (optimal transmission rate) performance of the ANN technique is quite close to that of the GD method.
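To make the EC computation concrete, the following minimal Python sketch evaluates the effective capacity of a Markov-modulated service process from its transition matrix and per-state MGFs, using the standard spectral-radius formulation. The two-state chain, service rates, and QoS exponent below are illustrative assumptions, not the paper's actual authentication-dependent model.

```python
import numpy as np

# Minimal sketch (assumed illustrative parameters, not the paper's model).
# EC(theta) = -(1/theta) * ln( spectral_radius( P @ diag(phi) ) ),
# where P is the state-transition matrix and phi[i] is the MGF
# E[exp(-theta * r_i)] of the service rate r_i in state i.

def effective_capacity(P, rates, theta):
    """EC of a Markov-modulated service process (nats per block)."""
    Phi = np.diag(np.exp(-theta * np.asarray(rates)))  # per-state MGFs
    radius = np.max(np.abs(np.linalg.eigvals(P @ Phi)))  # spectral radius
    return -np.log(radius) / theta

# Hypothetical 2-state chain: state 0 = packet accepted at rate r,
# state 1 = packet rejected/dropped (zero service).
r = 2.0                      # transmission rate (assumed)
p_keep, p_drop = 0.9, 0.1    # assumed transition probabilities
P = np.array([[p_keep, p_drop],
              [p_keep, p_drop]])
print(effective_capacity(P, rates=[r, 0.0], theta=0.5))
```

In the paper's setting the transition probabilities would themselves be functions of the authentication parameters (false-alarm rate, Eve's transmission probability), so the EC could then be maximized over r by gradient descent.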
Machine Type Communication Devices (MTCDs) are usually based on the Internet Protocol (IP), which allows billions of connected objects to become part of the Internet. The enormous amount of data coming from these devices is quite heterogeneous in nature, which can lead to security issues such as injection attacks, ballot stuffing, and bad mouthing. Consequently, this work considers machine-learning trust evaluation as an effective and accurate option for addressing the issues associated with these security threats. In this paper, a comparative analysis is carried out with five different machine-learning approaches: Naive Bayes (NB), Decision Tree (DT), Linear and Radial Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Random Forest (RF). As a critical element of the research, the recommendations consider different Machine-to-Machine (M2M) communication nodes with regard to their ability to identify malicious and honest information. To validate the performance of these models, two trust computation measures were used: Receiver Operating Characteristic (ROC) curves, and Precision-Recall. The malicious data was generated in MATLAB in a scenario where 50% of the information was modified to be malicious; the proportion of malicious nodes was varied over 10%, 20%, 30%, and 40%, and the results were carefully analyzed.
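As a rough illustration of the comparison described above (the paper's pipeline is in MATLAB; this sketch uses Python/scikit-learn, with a synthetic dataset standing in for the M2M trust recommendations), the five classifiers could be benchmarked with ROC-AUC, precision, and recall as follows.

```python
# Illustrative sketch, not the paper's actual experiment: compare the five
# classifiers on synthetic binary "honest vs. malicious" recommendation data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, precision_score, recall_score

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "NB": GaussianNB(),
    "DT": DecisionTreeClassifier(),
    "SVM-linear": SVC(kernel="linear", probability=True),
    "SVM-radial": SVC(kernel="rbf", probability=True),
    "KNN": KNeighborsClassifier(),
    "RF": RandomForestClassifier(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    y_hat = model.predict(X_te)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC={auc:.3f}  "
          f"precision={precision_score(y_te, y_hat):.3f}  "
          f"recall={recall_score(y_te, y_hat):.3f}")
```

In the paper's setup, the fraction of malicious nodes (10% to 40%) would be varied when generating the data, and the metrics recorded for each fraction.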
Peer-to-peer (P2P) network overlays have attracted enormous interest in the research community due to their robustness, scalability, and increased data availability. P2P networks are overlays of logically connected hosts and other nodes, including servers, and they allow users to share their files without the need for centralized servers. Since P2P networks are largely constructed of end-hosts, they are susceptible to abuse and malicious activity, such as sybil attacks. Impostors mount sybil attacks by assigning multiple addresses to nodes, as opposed to a single address, with the goal of degrading network quality; sybil nodes spread malicious data and provide bogus responses to requests. To prevent sybil attacks, a novel defense mechanism is proposed in which the DHT key space is divided and treated in a manner similar to radio-frequency allocation licensing. An overlay of trusted nodes is used to detect and handle sybil nodes with the aid of source-destination pairs reporting on each other. Simulation results show that the proposed scheme detects sybil nodes in large networks with thousands of interactions.
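The abstract does not give the scheme's details, so the following toy Python sketch only illustrates one plausible reading of the key-space "licensing" idea: the DHT key space is split into licensed segments, and an entity whose claimed identifiers occupy more segments than an assumed policy allows is flagged from peer reports. All names, the segment count, and the threshold are hypothetical.

```python
# Toy sketch (assumed interpretation, not the paper's actual mechanism).
import hashlib
from collections import defaultdict

KEY_BITS = 32
SEGMENTS = 16
SEG_SIZE = (1 << KEY_BITS) // SEGMENTS

def node_key(identifier: str) -> int:
    """Hash a claimed node identifier into the DHT key space."""
    return int(hashlib.sha1(identifier.encode()).hexdigest(), 16) % (1 << KEY_BITS)

def segment(key: int) -> int:
    """Licensed segment that a key falls into."""
    return key // SEG_SIZE

# Trusted overlay collects source-destination reports:
# which segments has each physical entity claimed identifiers in?
claims = defaultdict(set)

def report(entity: str, claimed_id: str):
    claims[entity].add(segment(node_key(claimed_id)))

# An honest host presents one identifier; a sybil host registers many.
report("honest-host", "node-A")
for i in range(8):
    report("sybil-host", f"fake-node-{i}")

THRESHOLD = 1  # licensed segments allowed per entity (assumed policy)
suspects = [e for e, segs in claims.items() if len(segs) > THRESHOLD]
print("flagged as sybil:", suspects)
```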
Machine learning is being used in a wide range of application domains to discover patterns in large datasets. Increasingly, the results of machine learning drive critical decisions in applications related to healthcare and biomedicine. Such health-related applications are often sensitive, so any security breach would be catastrophic; naturally, the integrity of the results computed by machine learning is of great importance. Recent research has shown that some machine-learning algorithms can be compromised by augmenting their training datasets with malicious data, leading to a new class of attacks called poisoning attacks. Hindering a diagnosis may have life-threatening consequences, while a false positive classification may cause patient distress, prompt users to distrust the machine-learning algorithm, and even lead them to abandon the entire system. In this paper, we present a systematic, algorithm-independent approach for mounting poisoning attacks across a wide range of machine-learning algorithms and healthcare datasets. The proposed attack procedure generates input data which, when added to the training set, can either cause the results of machine learning to have targeted errors (e.g., an increased likelihood of classification into a specific class) or simply introduce arbitrary errors (incorrect classification). These attacks may be applied to both fixed and evolving datasets, and can be mounted even when only statistics of the training dataset are available or, in some cases, without any access to the training dataset, although at a lower efficacy. We establish the effectiveness of the proposed attacks using a suite of six machine-learning algorithms and five healthcare datasets. Finally, we present countermeasures against the proposed generic attacks, based on tracking and detecting deviations in various accuracy metrics, and benchmark their effectiveness.
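A minimal sketch of the generic idea (an illustration, not the paper's actual attack generator): inject label-flipped copies of training points, observe the held-out accuracy drop, and apply a simple deviation-tracking countermeasure. The classifier, poisoning fraction, and tolerance below are illustrative assumptions.

```python
# Illustrative label-flipping poisoning sketch (assumed parameters throughout).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1500, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Baseline accuracy on a clean training set.
clean_acc = LogisticRegression(max_iter=500).fit(X_tr, y_tr).score(X_te, y_te)

# Poison: append copies of real points with flipped labels
# (flipping toward one class yields a targeted error; random flips, arbitrary).
n_poison = int(0.2 * len(X_tr))
idx = rng.choice(len(X_tr), n_poison, replace=False)
X_poisoned = np.vstack([X_tr, X_tr[idx]])
y_poisoned = np.concatenate([y_tr, 1 - y_tr[idx]])

poisoned_acc = LogisticRegression(max_iter=500).fit(
    X_poisoned, y_poisoned).score(X_te, y_te)
print(f"clean={clean_acc:.3f}  poisoned={poisoned_acc:.3f}")

# Countermeasure in the spirit of the paper: track accuracy deviation and
# quarantine a new training batch if it exceeds an assumed tolerance.
if clean_acc - poisoned_acc > 0.05:
    print("deviation detected: quarantine the new training batch")
```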