Bibliography
The main intention of edge computing is to improve network performance by storing and computing data at the edge of the network, near the end user. However, its rapid development has largely ignored security threats in these large-scale computing platforms and the applications they enable. Security and privacy are therefore crucial needs for edge computing and edge-computing-based environments. Security vulnerabilities in edge computing systems lead to security threats affecting edge computing networks. There is thus a basic need for an intrusion detection system (IDS) designed for edge computing to mitigate security attacks. In light of recent attacks, traditional algorithms may not be feasible for edge computing. This article outlines the latest IDSs designed for edge computing and focuses on the corresponding methods, functions and mechanisms. The review also provides a deep understanding of emerging security attacks in edge computing. It shows that although the design and implementation of IDSs for edge computing have been studied previously, the development of efficient, reliable and robust IDSs for edge computing systems is still a crucial task. At the end of the review, directions for IDS development are introduced as future prospects.
Supervisory control and data acquisition (SCADA) networks provide high situational awareness and automation control for industrial control systems, whilst introducing a wide range of access points for cyber attackers. To address these issues, a line of machine learning or deep learning based intrusion detection systems (IDSs) have been presented in the literature, where a large number of attack examples are usually demanded. However, in real-world SCADA networks, attack examples are not always sufficient, with only a few examples available in many cases. In this paper, we propose a novel few-shot learning based IDS, named FS-IDS, to detect cyber attacks against SCADA networks, especially when only a few attack examples are in the defenders' hands. Specifically, a new method orchestrating one-hot encoding and principal component analysis is developed to preprocess SCADA datasets containing sufficient examples of frequent cyber attacks. Then, a few-shot learning based preliminary IDS model is designed and trained using the preprocessed data. Finally, a complete FS-IDS model for SCADA networks is established by further training the preliminary IDS model with a few examples of the cyber attacks of interest. The high effectiveness of the proposed FS-IDS in detecting cyber attacks against SCADA networks with only a few examples is demonstrated by extensive experiments on a real SCADA dataset.
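The preprocessing and few-shot stages can be illustrated with a minimal sketch. The SCADA field names, the PCA dimensionality, and the nearest-prototype classifier below are illustrative assumptions, not the authors' exact FS-IDS design.

```python
# Sketch: one-hot encode categorical SCADA fields, standardize numeric ones,
# reduce with PCA, then classify a rare attack class from a few labelled
# examples using class prototypes (nearest mean in the reduced space).
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

categorical = ["protocol", "function_code"]          # hypothetical SCADA fields
numeric = ["pkt_len", "inter_arrival", "register_addr"]

preprocess = Pipeline([
    ("encode", ColumnTransformer([
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
        ("num", StandardScaler(), numeric),
    ], sparse_threshold=0.0)),                       # force a dense matrix for PCA
    ("pca", PCA(n_components=5)),
])

def fit_prototypes(X_emb, y):
    """One prototype (mean embedding) per class -- a minimal few-shot model."""
    y = np.asarray(y)
    return {c: X_emb[y == c].mean(axis=0) for c in np.unique(y)}

def predict(prototypes, X_emb):
    classes = list(prototypes)
    dists = np.stack([np.linalg.norm(X_emb - prototypes[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

# Usage: fit `preprocess` on data with plentiful frequent-attack examples, then
# build prototypes for a new attack class from just a few labelled rows.
```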
Information security is the process of protecting data from security breaches and hackers. An intrusion detection program is a software framework that continuously tracks and analyzes network data to identify attacks using traditional techniques. These traditional techniques work efficiently on small data, but when they are applied to big data, analyzing the data properties takes a long time and becomes inefficient, so big data technologies such as Apache Spark, Hadoop and Flink are needed to design a modern Intrusion Detection System (IDS). In this paper, the design of an Apache Spark and classification-algorithm-based IDS is presented, employing Chi-square as the feature selection method for selecting features from network security event data. The performance of Logistic Regression, Decision Tree and SVM with SGD is evaluated in the Apache Spark based IDS, with AUROC and AUPR used as metrics. The training and testing times of each algorithm are also tabulated, and the NSL-KDD dataset is employed for all experiments.
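The kind of Spark pipeline described above might look roughly like the sketch below. The CSV path, column names, the number of selected features, and the choice of Logistic Regression as the example classifier are assumptions for illustration.

```python
# Sketch: Chi-square feature selection plus a classifier in Spark ML,
# evaluated with AUROC and AUPR on a held-out split.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import ChiSqSelector, VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("spark-ids-sketch").getOrCreate()

# Assume a preprocessed NSL-KDD CSV with numeric feature columns and a 0/1 "label".
df = spark.read.csv("nsl_kdd_preprocessed.csv", header=True, inferSchema=True)
feature_cols = [c for c in df.columns if c != "label"]

pipeline = Pipeline(stages=[
    VectorAssembler(inputCols=feature_cols, outputCol="features"),
    ChiSqSelector(numTopFeatures=20, featuresCol="features",
                  outputCol="selected", labelCol="label"),
    LogisticRegression(featuresCol="selected", labelCol="label"),
])

train, test = df.randomSplit([0.8, 0.2], seed=42)
model = pipeline.fit(train)
preds = model.transform(test)

evaluator = BinaryClassificationEvaluator(labelCol="label")
print("AUROC:", evaluator.evaluate(preds, {evaluator.metricName: "areaUnderROC"}))
print("AUPR :", evaluator.evaluate(preds, {evaluator.metricName: "areaUnderPR"}))
```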
Intrusion detection is one of the most prominent and challenging problems faced by cybersecurity organizations. An Intrusion Detection System (IDS) plays a vital role in identifying network security threats. It protects the network from vulnerable source code, viruses, worms and unauthorized intruders in many intranet/internet applications. Despite the many open source APIs and tools for intrusion detection, many network security problems still exist. These problems are handled through proper pre-processing, normalization, feature selection and ranking of benchmark dataset attributes prior to the application of self-learning-based classification algorithms. In this paper, we have performed a comprehensive comparative analysis of the benchmark datasets NSL-KDD and CIDDS-001. To obtain optimal results, we have used hybrid feature selection and ranking methods before applying self-learning (machine/deep learning) classification approaches such as SVM, Naïve Bayes, k-NN, Neural Networks, DNN and DAE. We have analyzed the performance of the IDS through prominent performance indicator metrics such as Accuracy, Precision, Recall and F1-Score. The experimental results show that the k-NN, SVM, NN and DNN classifiers achieve approximately 100% accuracy on the NSL-KDD dataset, whereas the k-NN and Naïve Bayes classifiers achieve approximately 99% accuracy on the CIDDS-001 dataset.
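A minimal sketch of such a comparative study is shown below, with a shared scaling and feature-selection front end and several of the listed classifiers scored on Accuracy, Precision, Recall and F1. The selector (mutual information), hyperparameters, and dataset loading are illustrative assumptions rather than the authors' exact hybrid method.

```python
# Sketch: the same scaling + feature-selection front end feeds several of the
# classifiers named above, each scored with Accuracy, Precision, Recall, F1.
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

classifiers = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "SVM": LinearSVC(max_iter=5000),
    "Naive Bayes": GaussianNB(),
    "NN": MLPClassifier(hidden_layer_sizes=(64,), max_iter=300),
}

def evaluate(X_train, y_train, X_test, y_test, k=20):
    """Fit and score each classifier behind the same preprocessing pipeline."""
    for name, clf in classifiers.items():
        pipe = make_pipeline(StandardScaler(),
                             SelectKBest(mutual_info_classif, k=k), clf)
        pipe.fit(X_train, y_train)
        y_pred = pipe.predict(X_test)
        print(f"{name:12s}"
              f" acc={accuracy_score(y_test, y_pred):.3f}"
              f" prec={precision_score(y_test, y_pred, average='weighted'):.3f}"
              f" rec={recall_score(y_test, y_pred, average='weighted'):.3f}"
              f" f1={f1_score(y_test, y_pred, average='weighted'):.3f}")

# Usage (hypothetical): load a numeric, label-encoded NSL-KDD or CIDDS-001
# split and call evaluate(X_train, y_train, X_test, y_test).
```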
Denial of service (DoS) is the process of injecting malicious packets into a network. An intrusion detection system (IDS) is used to investigate malicious packets in the network. A software-defined network (SDN) physically separates the control plane from the data plane; the control plane is moved to a centralized controller, which makes decisions for the network from a global view. Combining an IDS with SDN makes the prevention of malicious packets more efficient, thanks to the advantage of the SDN's global view. The IDS needs to communicate with the switches to have access to all end-to-end traffic in the network. High traffic on the link between the switches and the IDS results in congestion, which delays the detection and prevention of malicious traffic. To address this problem, we propose a historical database (Hdb), a scheme to reduce the traffic between the switches and the IDS based on the historical information of a sender. The simulation shows that, on average, 54.1% of the traffic mirrored to the IDS is reduced compared to conventional schemes.
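The Hdb mechanics are not spelled out in the abstract, so the sketch below only illustrates the general idea under assumptions: the controller keeps a per-sender history of past IDS verdicts and stops mirroring traffic from senders that have repeatedly been judged benign.

```python
# Sketch: per-sender history of IDS verdicts used to decide whether a flow
# still needs to be mirrored to the IDS. Threshold and policy are assumptions.
from collections import defaultdict

BENIGN_THRESHOLD = 100          # assumed: clean flows required before trusting a sender

class HistoricalDB:
    def __init__(self):
        self.clean_flows = defaultdict(int)   # sender IP -> benign flow count
        self.flagged = set()                  # senders previously flagged by the IDS

    def record_verdict(self, sender, malicious):
        if malicious:
            self.flagged.add(sender)
            self.clean_flows[sender] = 0
        else:
            self.clean_flows[sender] += 1

    def should_mirror(self, sender):
        """Mirror to the IDS only if the sender is flagged or not yet trusted."""
        if sender in self.flagged:
            return True
        return self.clean_flows[sender] < BENIGN_THRESHOLD

# Usage at the controller: for each new flow, call should_mirror(src_ip) to
# decide whether the flow is copied to the IDS, then feed the verdict back
# with record_verdict(src_ip, malicious).
hdb = HistoricalDB()
print(hdb.should_mirror("10.0.0.5"))   # True until the sender builds history
```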
A zero-day attack in networks exploits an undiscovered vulnerability in order to affect or damage networks or programs. The term "zero-day" refers to the number of days available to the software or hardware vendor to issue a patch for this new vulnerability. Currently, the best-known defense mechanism against zero-day attacks focuses on detection and response as a prevention effort, which typically fails against unknown or new vulnerabilities. To the best of our knowledge, this attack has not been widely investigated for Software-Defined Networks (SDNs). Therefore, in this work we are motivated to develop a new zero-day attack detection and prevention mechanism, designed and implemented for SDN using a modified version of the sandbox tool Cuckoo. Our experimental results, on a UNIX system, show that our proposed design successfully stops zero-day malware by isolating the infected client, thus preventing the malware from infecting other clients.
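A minimal, hypothetical sketch of the prevention step is given below: once the sandbox reports a sample as malicious, the controller isolates the infected client by installing a drop rule. The REST endpoint, rule format, and report fields are assumptions for illustration, not Cuckoo's or any particular controller's real API.

```python
# Sketch: react to a (simplified) sandbox verdict by pushing a drop rule for
# the infected client's traffic to a hypothetical controller REST endpoint.
import json
import urllib.request

CONTROLLER_URL = "http://127.0.0.1:8080/flows"   # hypothetical controller endpoint

def isolate_client(client_ip: str) -> None:
    """Install a drop-all rule for the infected client."""
    rule = {
        "priority": 100,
        "match": {"ipv4_src": client_ip},
        "actions": [],                 # empty action list == drop
    }
    req = urllib.request.Request(
        CONTROLLER_URL,
        data=json.dumps(rule).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)

def handle_sandbox_report(report: dict) -> None:
    """Isolate the submitting client when the sandbox flags the sample."""
    if report.get("malicious"):
        isolate_client(report["client_ip"])

# handle_sandbox_report({"malicious": True, "client_ip": "10.0.0.7"})
```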
The Internet of Things (IoT) is a smart network that connects smart objects over the Internet. The Internet is an untrusted and unreliable network, and thus IoT networks are vulnerable to different kinds of attacks. Conventional encryption and authentication techniques sometimes fail on IoT-based networks, and intrusions may succeed in destroying the network. It is therefore necessary to design an intrusion detection system for such networks. In our paper, we detect routing attacks such as sinkhole and selective forwarding, and we also try to protect our network from these attacks. We designed detection and prevention algorithms, namely KMA (Key Match Algorithm) and CBA (Cluster-Based Algorithm), in a MATLAB simulation environment. We present two intrusion detection mechanisms and compare their results. The true-positive intrusion detection rate of our work is between 50% and 80% with KMA and between 76% and 96% with the CBA algorithm.
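KMA and CBA are not detailed in the abstract, so the sketch below shows only a generic, assumed heuristic for spotting selective forwarding: compare how many packets a node receives for relaying with how many it actually forwards and flag nodes whose forwarding ratio falls below a threshold.

```python
# Sketch: flag nodes that drop too many of the packets they should relay.
FORWARD_RATIO_THRESHOLD = 0.8   # assumed tolerance for normal loss

def detect_selective_forwarding(received, forwarded):
    """received/forwarded: dicts mapping node id -> packet counts."""
    suspects = []
    for node, rx in received.items():
        tx = forwarded.get(node, 0)
        ratio = tx / rx if rx else 1.0
        if ratio < FORWARD_RATIO_THRESHOLD:
            suspects.append((node, ratio))
    return suspects

# Example: node "n3" drops most of the packets it should relay.
print(detect_selective_forwarding(
    {"n1": 100, "n2": 95, "n3": 120},
    {"n1": 98,  "n2": 93, "n3": 40}))   # -> [('n3', 0.333...)]
```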
With the rapid adoption of network-based communication in industry, security-related problems appear to be inevitable for automation networks. The integration of the Internet into the automation plant has greatly benefited companies and engineers, but on the other hand has paved the way for a number of threats. An attack on such control-critical infrastructure may endanger people's health and safety, damage industrial facilities and produce financial loss. One approach to securing the automation network is the development of an efficient Network-based Intrusion Detection System (NIDS). Despite the several techniques available for intrusion detection, they still lag in efficiently identifying possible or novel attacks on the network. In this paper, we evaluate the performance of a detection mechanism that combines deep learning techniques with machine learning techniques for the development of an Intrusion Detection System (IDS). Performance metrics such as precision, recall and F-measure were measured.
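One common way to combine deep and classical learning for such an IDS, sketched under assumptions below, is to learn a compressed representation of flow features with a small autoencoder and classify that representation with a random forest; the architecture sizes and data shapes are illustrative, not the paper's configuration.

```python
# Sketch: a small autoencoder learns compressed "deep" features of network
# flows; a random forest then classifies the learned representation.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

class AutoEncoder(nn.Module):
    def __init__(self, n_features, latent=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                     nn.Linear(32, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(),
                                     nn.Linear(32, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_hybrid(X, y, epochs=50):
    X_t = torch.tensor(X, dtype=torch.float32)
    ae = AutoEncoder(X.shape[1])
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):                     # unsupervised reconstruction
        opt.zero_grad()
        loss = loss_fn(ae(X_t), X_t)
        loss.backward()
        opt.step()
    with torch.no_grad():
        Z = ae.encoder(X_t).numpy()             # deep features
    clf = RandomForestClassifier().fit(Z, y)    # classical classifier on top
    return ae, clf

# Toy usage: X = np.random.rand(200, 20); y = np.random.randint(0, 2, 200)
# ae, clf = train_hybrid(X, y)
```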
Intrusion detection systems need to be both accurate and fast. Speed is especially important when operating at the network level. Additionally, many intrusion detection systems rely on signature-based detection approaches. However, machine learning can also be helpful for intrusion detection. One key challenge when using machine learning, aside from detection accuracy, is using machine learning algorithms that are fast. In this paper, several processing architectures are considered for use in machine learning based intrusion detection systems. These architectures include standard CPUs, GPUs, and cognitive processors. Results of their processing speeds are compared and discussed.
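A simple benchmark in the spirit of that comparison is sketched below: it times batched inference of the same small model on the CPU and, if one is available, on a GPU. The model, batch size and repetition count are arbitrary choices, and cognitive processors are not covered here.

```python
# Sketch: average per-batch inference time of the same small model on the CPU
# and, if available, on a GPU.
import time
import torch
import torch.nn as nn

def time_inference(device, model, x, repeats=100):
    model, x = model.to(device), x.to(device)
    if device.type == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    with torch.no_grad():
        for _ in range(repeats):
            model(x)
    if device.type == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / repeats

model = nn.Sequential(nn.Linear(41, 64), nn.ReLU(), nn.Linear(64, 2))
x = torch.randn(4096, 41)          # batch of records with 41 features

print("CPU:", time_inference(torch.device("cpu"), model, x))
if torch.cuda.is_available():
    print("GPU:", time_inference(torch.device("cuda"), model, x))
```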
In recent years, we have seen a number of successful attacks against high-profile targets, some of which have even caused severe physical damage. These examples have shown us that resourceful and determined attackers can penetrate virtually any system, even those that are secured by an "air gap." Consequently, in order to minimize the impact of stealthy attacks, defenders have to focus not only on strengthening the first lines of defense but also on deploying effective intrusion-detection systems. Intrusion-detection systems can play a key role in protecting sensitive computer systems since they give defenders a chance to detect and mitigate attacks before they can cause substantial losses. However, an over-sensitive intrusion-detection system, which produces a large number of false alarms, imposes prohibitively high operational costs on a defender since alarms need to be manually investigated. Thus, defenders have to strike the right balance between maximizing security and minimizing costs. Optimizing the sensitivity of intrusion-detection systems is especially challenging when multiple interdependent computer systems have to be defended against a strategic attacker, who can target computer systems in order to maximize losses and minimize the probability of detection. We model this scenario as an attacker-defender security game and study the problem of finding optimal intrusion-detection thresholds.
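The game-theoretic model itself is beyond a short snippet, but the defender's underlying trade-off can be sketched under stated assumptions: given assumed benign and attack score distributions, an attack prior, and costs for false alarms and missed detections, sweep the detection threshold and pick the one minimizing expected cost.

```python
# Sketch: expected-cost-minimizing detection threshold under assumed score
# distributions, attack prior, and investigation / breach costs.
import numpy as np
from scipy.stats import norm

C_FALSE_ALARM = 1.0     # assumed cost of investigating one false alarm
C_MISS = 50.0           # assumed loss from an undetected attack
P_ATTACK = 0.01         # assumed prior probability that an event is an attack

benign = norm(loc=0.0, scale=1.0)   # assumed anomaly-score distributions
attack = norm(loc=3.0, scale=1.0)

thresholds = np.linspace(-2.0, 6.0, 400)
false_alarm_rate = benign.sf(thresholds)   # P(score > t | benign)
miss_rate = attack.cdf(thresholds)         # P(score <= t | attack)
expected_cost = ((1 - P_ATTACK) * false_alarm_rate * C_FALSE_ALARM
                 + P_ATTACK * miss_rate * C_MISS)

best = thresholds[expected_cost.argmin()]
print(f"cost-minimizing threshold ~= {best:.2f}")
```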
With the development of the Internet, network intrusions have become more and more common. Quickly identifying and preventing network attacks is becoming increasingly important and difficult. Machine learning techniques have already proven to be robust methods for detecting malicious activities and network threats. Ensemble-based and semi-supervised learning methods are among the areas that receive the most attention in machine learning today, yet relatively little attention has been given to combining them. To overcome this limitation, this paper proposes a novel network anomaly detection method that combines a tri-training approach with Adaboost algorithms. The bootstrap samples of tri-training are replaced by three different Adaboost algorithms to create diversity. We run 30 iterations for every simulation to obtain average results. Simulations indicate that our proposed semi-supervised Adaboost algorithm is reproducible and consistent over different numbers of runs. It outperforms other state-of-the-art learning algorithms, even with only a small portion of labeled data in the training phase. Specifically, it has a very short execution time and a good balance between the detection rate and the false-alarm rate.
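A simplified sketch of the tri-training idea is given below, with three differently configured Adaboost learners standing in for the three views; real tri-training adds error-rate checks and sampling bounds that are omitted here, and the hyperparameters are illustrative.

```python
# Simplified tri-training sketch: three AdaBoost learners label unlabeled data
# for each other; a learner is retrained on examples where the other two agree.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def tri_train(X_lab, y_lab, X_unlab, rounds=3):
    learners = [
        AdaBoostClassifier(n_estimators=50, learning_rate=1.0),
        AdaBoostClassifier(n_estimators=100, learning_rate=0.5),
        AdaBoostClassifier(n_estimators=200, learning_rate=0.1),
    ]
    for clf in learners:
        clf.fit(X_lab, y_lab)
    for _ in range(rounds):
        preds = [clf.predict(X_unlab) for clf in learners]
        for i, clf in enumerate(learners):
            j, k = [m for m in range(3) if m != i]
            agree = preds[j] == preds[k]              # the other two agree
            if agree.any():
                X_aug = np.vstack([X_lab, X_unlab[agree]])
                y_aug = np.concatenate([y_lab, preds[j][agree]])
                clf.fit(X_aug, y_aug)                 # retrain with pseudo-labels
    return learners

def predict_vote(learners, X):
    """Majority vote of the three learners (assumes integer class labels)."""
    votes = np.stack([clf.predict(X) for clf in learners])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```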
Traffic of Industrial Control System (ICS) between the Human Machine Interface (HMI) and the Programmable Logic Controller (PLC) is highly periodic. However, it is sometimes multiplexed, due to multi-threaded scheduling. In previous work we introduced a Statechart model which includes multiple Deterministic Finite Automata (DFA), one per cyclic pattern. We demonstrated that Statechart-based anomaly detection is highly effective on multiplexed cyclic traffic when the individual cyclic patterns are known. The challenge is to construct the Statechart, by unsupervised learning, from a captured trace of the multiplexed traffic, especially when the same symbols (ICS messages) can appear in multiple cycles, or multiple times in a cycle. Previously we suggested a combinatorial approach for the Statechart construction, based on Euler cycles in the Discrete Time Markov Chain (DTMC) graph of the trace. This combinatorial approach worked well in simple scenarios, but produced a false-alarm rate that was excessive on more complex multiplexed traffic. In this paper we suggest a new Statechart construction method, based on spectral analysis. We use the Fourier transform to identify the dominant periods in the trace. Our algorithm then associates a set of symbols with each dominant period, identifies the order of the symbols within each period, and creates the cyclic DFAs and the Statechart. We evaluated our solution on long traces from two production ICS: one using the Siemens S7-0x72 protocol and the other using Modbus. We also stress-tested our algorithms on a collection of synthetically-generated traces that simulate multiplexed ICS traces with varying levels of symbol uniqueness and time overlap. The resulting Statecharts model the traces with an overall median false-alarm rate as low as 0.16% on the synthetic datasets, and with zero false alarms on production S7-0x72 traffic. Moreover, the spectral analysis Statecharts consistently outperformed the previous combinatorial Statecharts, exhibiting significantly lower false alarm rates and more compact model sizes.
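The spectral step can be illustrated with a small sketch, under assumptions about the trace format: the occurrences of one ICS message symbol are binned into a binary time series, its Fourier transform is taken, and the lowest strong frequency gives the dominant period. Associating symbols with periods and building the per-period DFAs are further steps not shown.

```python
# Sketch: recover the dominant period of one message symbol from its arrival
# times using the FFT of a binned occurrence series.
import numpy as np

def dominant_period(window, symbol_times, resolution=0.01):
    """window: (t_start, t_end) of the trace; symbol_times: arrival times of one symbol."""
    t0, t1 = window
    n_bins = int((t1 - t0) / resolution)
    series = np.zeros(n_bins)
    idx = np.round((np.asarray(symbol_times) - t0) / resolution).astype(int)
    series[idx[(idx >= 0) & (idx < n_bins)]] = 1.0

    spectrum = np.abs(np.fft.rfft(series - series.mean()))
    freqs = np.fft.rfftfreq(n_bins, d=resolution)
    # A periodic impulse train peaks at its fundamental and all harmonics, so
    # take the lowest frequency whose magnitude is close to the maximum.
    strong = np.where(spectrum >= 0.9 * spectrum.max())[0]
    return 1.0 / freqs[strong[0]]

# Example: a message sent every 0.5 s over a 60 s trace is recovered as ~0.5 s.
times = np.arange(0, 60, 0.5)
print(round(dominant_period((0, 60), times), 2))
```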
A major challenge of existing attack detection approaches is identifying information relevant to a particular situation and using that information to perform multi-evidence intrusion detection. Addressing this limitation requires integrating several aspects of context to better predict, avoid and respond to impending attacks. The quality and adequacy of contextual information are important for decreasing uncertainty and correctly identifying potential cyber-attacks. In this paper, a systematic methodology is used to identify contextual dimensions that improve the effectiveness of detecting cyber-attacks. This methodology combines graph, probability, and information theories to create several context-based attack prediction models that analyze data at high and low levels. An extensive validation of our approach has been performed using a prototype system and several benchmark intrusion detection datasets, yielding very promising results.
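The methodology combines graph, probability and information theories; the sketch below illustrates only an information-theoretic ingredient under assumptions, scoring candidate contextual features by their mutual information with the attack label to judge which context dimensions are likely to help.

```python
# Sketch: rank hypothetical contextual features by mutual information with the
# attack label; higher scores suggest more useful context dimensions.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def rank_context_features(X_context, y_attack, names):
    """X_context: samples x contextual features; y_attack: 0/1 attack labels."""
    scores = mutual_info_classif(X_context, y_attack)
    return sorted(zip(names, scores), key=lambda p: p[1], reverse=True)

# Toy example with three hypothetical context dimensions.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)
X = np.column_stack([
    y + rng.normal(0, 0.3, 500),        # "service criticality": informative here
    rng.normal(0, 1, 500),              # "time of day": pure noise here
    rng.normal(0, 1, 500),              # "user role": pure noise here
])
print(rank_context_features(X, y, ["service_criticality", "time_of_day", "user_role"]))
```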