Biblio

Found 1065 results

Filters: Keyword is machine learning
2020-04-03
Fawaz, Kassem, Linden, Thomas, Harkous, Hamza.  2019.  Invited Paper: The Applications of Machine Learning in Privacy Notice and Choice. 2019 11th International Conference on Communication Systems Networks (COMSNETS). :118—124.
For more than two decades since the rise of the World Wide Web, the “Notice and Choice” framework has been the governing practice for the disclosure of online privacy practices. The emergence of new forms of user interactions, such as voice, and the enforcement of new regulations, such as the EU's recent General Data Protection Regulation (GDPR), promise to change this privacy landscape drastically. This paper discusses the challenges of providing privacy stakeholders with privacy awareness and control in this changing landscape. We also present our recent research on utilizing machine learning to analyze privacy policies and settings.
2020-01-21
Singh, Malvika, Mehtre, B.M., Sangeetha, S..  2019.  User Behavior Profiling Using Ensemble Approach for Insider Threat Detection. 2019 IEEE 5th International Conference on Identity, Security, and Behavior Analysis (ISBA). :1–8.

The greatest threat to securing an organization and its assets is no longer the attackers beyond the network walls of the organization but the insiders within it with malicious intent. Existing approaches help to monitor, detect, and prevent malicious activities within an organization's network while ignoring the impact of human behavior on security. In this paper we focus on a user behavior profiling approach that monitors and analyzes user behavior action sequences to detect insider threats. We present an ensemble hybrid machine learning approach using Multi-State Long Short Term Memory (MSLSTM) and Convolutional Neural Network (CNN) based time series anomaly detection to detect additive outliers in behavior patterns based on their spatial-temporal behavior features. We find that the multi-state LSTM performs better than a basic single-state LSTM. When trained with a publicly available insider threat dataset, the proposed method with multi-state LSTM successfully detects insider threats, achieving an AUC of 0.9042 on training data and 0.9047 on test data.
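
A minimal sketch of the CNN-plus-LSTM time-series idea described above, assuming TensorFlow/Keras and synthetic placeholder data; the layer sizes, sequence shape, and single-state LSTM are hypothetical simplifications of the paper's MSLSTM architecture:

```python
# A hybrid CNN+LSTM time-series classifier on synthetic user-action data.
# Shapes, layer sizes, and data are hypothetical; the paper's exact
# multi-state LSTM architecture is not reproduced here.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

T, F = 50, 8                       # time steps, behavior features per step
X = np.random.rand(256, T, F)      # placeholder user-action sequences
y = np.random.randint(0, 2, 256)   # 1 = insider threat, 0 = benign

model = models.Sequential([
    layers.Input(shape=(T, F)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),   # spatial features
    layers.MaxPooling1D(2),
    layers.LSTM(64),                                       # temporal features
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))
```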

2020-04-03
Alom, Md. Zulfikar, Carminati, Barbara, Ferrari, Elena.  2019.  Adapting Users' Privacy Preferences in Smart Environments. 2019 IEEE International Congress on Internet of Things (ICIOT). :165—172.
A smart environment is a physical space where devices are connected to provide continuous support to individuals and make their life more comfortable. For this purpose, a smart environment collects, stores, and processes a massive amount of personal data. In general, service providers collect these data according to their privacy policies. To enhance privacy control, individuals can explicitly express their privacy preferences, stating conditions on how their data have to be used and managed. Typically, privacy checking is handled through the hard matching of users' privacy preferences against service providers' privacy policies, denying all service requests whose privacy policies do not fully match an individual's privacy preferences. However, this hard matching might be too restrictive in a smart environment because it denies services that partially satisfy the individual's privacy preferences. To cope with this challenge, in this paper, we propose a soft privacy matching mechanism, able to relax, in a controlled way, some conditions of users' privacy preferences so that they match service providers' privacy policies. To this aim, we exploit machine learning algorithms to build a classifier, which is able to make decisions on future service requests by learning which privacy preference components a user is prone to relax, as well as the relaxation tolerance. We test our approach on two realistic datasets, obtaining promising results.
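
A minimal sketch of the learned soft-matching decision, assuming scikit-learn; the per-component "gap" features and accept/deny labels are hypothetical stand-ins for the paper's actual preference encoding:

```python
# Sketch: learn which privacy-preference relaxations a user would accept.
# Feature encoding and labels are hypothetical, not the paper's scheme.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Each row: per-component gap between the user's preference and the
# provider's policy (e.g., retention period, purpose, third-party sharing).
X = np.random.rand(500, 6)
y = np.random.randint(0, 2, 500)   # 1 = user accepted the relaxation

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=4).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```
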
2020-08-07
Zhu, Tianqing, Yu, Philip S..  2019.  Applying Differential Privacy Mechanism in Artificial Intelligence. 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS). :1601—1609.
Artificial Intelligence (AI) has attracted a large amount of attention in recent years. However, several new problems have been emerging, such as privacy violations, security issues, and questions of effectiveness. Differential privacy has several attractive properties that make it quite valuable for AI, such as privacy preservation, security, randomization, composition, and stability. This paper therefore presents differential privacy mechanisms for multi-agent systems, reinforcement learning, and knowledge transfer based on those properties, showing that current AI can benefit from differential privacy mechanisms. In addition, the previous usage of differential privacy mechanisms in private machine learning, distributed machine learning, and fairness in models is discussed, suggesting several possible avenues for using differential privacy mechanisms in AI. The purpose of this paper is to deliver the initial idea of how to integrate AI with differential privacy mechanisms and to explore more possibilities to improve AI's performance.
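
The Laplace mechanism is the basic differential-privacy building block that such mechanisms compose; a minimal sketch in plain NumPy:

```python
# Laplace mechanism: add noise scaled to sensitivity/epsilon to a query result.
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return an epsilon-differentially-private estimate of true_value."""
    return true_value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

ages = np.array([23, 45, 31, 52, 40])
# Counting queries have sensitivity 1: one person changes the count by at most 1.
print(laplace_mechanism(len(ages), sensitivity=1.0, epsilon=0.5))
```
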
2020-03-30
Scherzinger, Stefanie, Seifert, Christin, Wiese, Lena.  2019.  The Best of Both Worlds: Challenges in Linking Provenance and Explainability in Distributed Machine Learning. 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS). :1620–1629.
Machine learning experts prefer to think of their input as a single, homogeneous, and consistent data set. However, when analyzing large volumes of data, the entire data set may not be manageable on a single server, but must be stored on a distributed file system instead. Moreover, with the pressing demand to deliver explainable models, the experts may no longer focus on the machine learning algorithms in isolation, but must take into account the distributed nature of the stored data, as well as the impact of any data pre-processing steps upstream in their data analysis pipeline. In this paper, we make the point that even basic transformations during data preparation can impact the model learned, and that this is exacerbated in a distributed setting. We then sketch our vision of end-to-end explainability of the learned model, taking the pre-processing into account. In particular, we point out the potential of linking research on data provenance with efforts on explainability in machine learning. In doing so, we highlight pitfalls we may encounter in a distributed system on the way to generating more holistic explanations for our machine learning models.
2020-02-26
Rahman, Obaid, Quraishi, Mohammad Ali Gauhar, Lung, Chung-Horng.  2019.  DDoS Attacks Detection and Mitigation in SDN Using Machine Learning. 2019 IEEE World Congress on Services (SERVICES). :184–189.

Software Defined Networking (SDN) is very popular due to the benefits it provides such as scalability, flexibility, monitoring, and ease of innovation. However, it needs to be properly protected from security threats. One major attack that plagues the SDN network is the distributed denial-of-service (DDoS) attack. There are several approaches to prevent the DDoS attack in an SDN network. We have evaluated a few machine learning techniques, i.e., J48, Random Forest (RF), Support Vector Machine (SVM), and K-Nearest Neighbors (K-NN), to detect and block the DDoS attack in an SDN network. The evaluation process involved training and selecting the best model for the proposed network and applying it in a mitigation and prevention script to detect and mitigate attacks. The results showed that J48 performs better than the other evaluated algorithms, especially in terms of training and testing time.
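
A minimal sketch of such a comparison, assuming scikit-learn and synthetic flow features; note that J48 is a C4.5 implementation, so scikit-learn's CART decision tree stands in for it here:

```python
# Sketch: compare the four evaluated classifier families on flow features.
# Features and labels are placeholders; J48 (C4.5) is approximated by CART.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X = np.random.rand(1000, 10)       # e.g., packet/byte counts, flow rates
y = np.random.randint(0, 2, 1000)  # 1 = DDoS traffic

for name, clf in [("J48-like tree", DecisionTreeClassifier()),
                  ("Random Forest", RandomForestClassifier()),
                  ("SVM", SVC()),
                  ("K-NN", KNeighborsClassifier())]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```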

2020-09-04
Laguduva, Vishalini, Islam, Sheikh Ariful, Aakur, Sathyanarayanan, Katkoori, Srinivas, Karam, Robert.  2019.  Machine Learning Based IoT Edge Node Security Attack and Countermeasures. 2019 IEEE Computer Society Annual Symposium on VLSI (ISVLSI). :670—675.
Advances in technology have enabled tremendous progress in the development of a highly connected ecosystem of ubiquitous computing devices collectively called the Internet of Things (IoT). Ensuring the security of IoT devices is a high priority due to the sensitive nature of the collected data. Physically Unclonable Functions (PUFs) have emerged as a critical hardware primitive for ensuring the security of IoT nodes. Malicious modeling of PUF architectures has proven to be difficult due to the inherently stochastic nature of PUF architectures. Extant approaches to malicious PUF modeling assume that a priori knowledge of and physical access to the PUF architecture are available for a malicious attack on the IoT node. However, many IoT networks make the underlying assumption that the PUF architecture is sufficiently tamper-proof, both physically and mathematically. In this work, we show that knowledge of the underlying PUF structure is not necessary to clone a PUF. We present a novel non-invasive, architecture-independent machine learning attack for strong PUF designs with a cloning accuracy of 93.5% and improvements of up to 48.31% over an alternative, two-stage brute force attack model. We also propose a machine-learning based countermeasure, a discriminator, which can distinguish cloned PUF devices from authentic PUFs with an average accuracy of 96.01%. The proposed discriminator can be used for rapidly authenticating millions of IoT nodes remotely from the cloud server.
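
A minimal sketch of the modeling-attack idea on a simulated arbiter PUF, assuming scikit-learn; the linear additive-delay model and parity transform are a classic textbook construction, not the paper's architecture-independent attack:

```python
# Sketch of a PUF modeling attack on a *simulated* linear arbiter PUF.
# Only illustrates learning responses from challenge-response pairs (CRPs).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

n_stages, n_crps = 64, 20000
rng = np.random.default_rng(0)
w = rng.normal(size=n_stages + 1)            # hidden delay parameters
C = rng.integers(0, 2, (n_crps, n_stages))   # random challenges

# Parity (phi) transform of the additive-delay arbiter PUF model.
phi = np.cumprod(1 - 2 * C[:, ::-1], axis=1)[:, ::-1]
phi = np.hstack([phi, np.ones((n_crps, 1))])
r = (phi @ w > 0).astype(int)                # observed responses

X_tr, X_te, y_tr, y_te = train_test_split(phi, r, test_size=0.2)
clone = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("cloning accuracy:", clone.score(X_te, y_te))
```
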
2020-12-11
Ge, X., Pan, Y., Fan, Y., Fang, C..  2019.  AMDroid: Android Malware Detection Using Function Call Graphs. 2019 IEEE 19th International Conference on Software Quality, Reliability and Security Companion (QRS-C). :71—77.

With the rapid development of the mobile Internet, Android has become the most popular mobile operating system. Due to the open nature of Android, countless malicious applications are hidden in a large number of benign applications, which pose great threats to users. Most previous malware detection approaches mainly rely on features such as permissions, API calls, and opcode sequences. However, these approaches fail to capture the structural semantics of applications. In this paper, we propose AMDroid, which leverages function call graphs (FCGs) representing the behaviors of applications and applies graph kernels to automatically learn the structural semantics of applications from FCGs. We evaluate AMDroid on the Genome Project, and the experimental results show that AMDroid is effective in detecting Android malware, with 97.49% detection accuracy.
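
A minimal sketch of graph-kernel learning over FCGs, assuming the GraKeL library; the tiny hand-made graphs, node labels, and kernel choice (Weisfeiler-Lehman) are illustrative assumptions, not AMDroid's actual pipeline:

```python
# Sketch: graph-kernel classification of function call graphs (FCGs).
# Tiny hand-made graphs stand in for real FCGs extracted from APKs.
from grakel import Graph
from grakel.kernels import WeisfeilerLehman, VertexHistogram
from sklearn.svm import SVC

# Each FCG: edge list plus node labels (e.g., coarse API categories).
g_benign = Graph([(0, 1), (1, 2)], node_labels={0: "ui", 1: "net", 2: "io"})
g_malware = Graph([(0, 1), (1, 2), (2, 0)],
                  node_labels={0: "net", 1: "sms", 2: "crypto"})
graphs, labels = [g_benign, g_malware] * 10, [0, 1] * 10

gk = WeisfeilerLehman(n_iter=3, base_graph_kernel=VertexHistogram, normalize=True)
K = gk.fit_transform(graphs)                 # precomputed kernel matrix
clf = SVC(kernel="precomputed").fit(K, labels)
print("train accuracy:", clf.score(K, labels))
```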

2020-07-10
Koloveas, Paris, Chantzios, Thanasis, Tryfonopoulos, Christos, Skiadopoulos, Spiros.  2019.  A Crawler Architecture for Harvesting the Clear, Social, and Dark Web for IoT-Related Cyber-Threat Intelligence. 2019 IEEE World Congress on Services (SERVICES). :3—8.

The clear, social, and dark web have lately been identified as rich sources of valuable cyber-security information that, given the appropriate tools and methods, may be identified, crawled, and subsequently leveraged into actionable cyber-threat intelligence. In this work, we focus on the information gathering task and present a novel crawling architecture for transparently harvesting data from security websites in the clear web, security forums in the social web, and hacker forums/marketplaces in the dark web. The proposed architecture adopts a two-phase approach to data harvesting. Initially, a machine learning-based crawler directs the harvesting towards websites of interest, while in the second phase state-of-the-art statistical language modelling techniques are used to represent the harvested information in a latent low-dimensional feature space and rank it based on its potential relevance to the task at hand. The proposed architecture is realised using exclusively open-source tools, and a preliminary evaluation with crowdsourced results demonstrates its effectiveness.
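
A minimal sketch of the second, ranking phase, assuming scikit-learn; LSA (TF-IDF plus truncated SVD) stands in for the paper's statistical language modelling, and the documents and query are invented:

```python
# Sketch: embed harvested texts in a latent space and rank by relevance
# to the cyber-threat-intelligence task.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.pipeline import make_pipeline

docs = ["new IoT botnet exploit sold on forum",
        "holiday photo sharing tips",
        "Mirai variant targets exposed telnet services"]
query = ["IoT malware exploit threat intelligence"]

lsa = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2))
doc_vecs = lsa.fit_transform(docs)
scores = cosine_similarity(lsa.transform(query), doc_vecs)[0]
for s, d in sorted(zip(scores, docs), reverse=True):
    print(f"{s:.2f}  {d}")
```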

Muñoz, Jordi Zayuelas i, Suárez-Varela, José, Barlet-Ros, Pere.  2019.  Detecting cryptocurrency miners with NetFlow/IPFIX network measurements. 2019 IEEE International Symposium on Measurements & Networking (M&N). :1—6.

In the last few years, cryptocurrency mining has become an increasingly important part of Internet activity and nowadays even has a noticeable impact on the global economy. This has motivated the emergence of a new malicious activity called cryptojacking, which consists of compromising other machines connected to the Internet and leveraging their resources to mine cryptocurrencies. In this context, it is of particular interest for network administrators to detect possible cryptocurrency miners using network resources without permission. Currently, it is possible to detect them using IP address lists from known mining pools, processing information from DNS traffic, or directly performing Deep Packet Inspection (DPI) over all the traffic. However, all these methods are ineffective at detecting miners that use unknown mining servers, or are too expensive to deploy in real-world networks with large traffic volumes. In this paper, we present a machine learning-based method able to detect cryptocurrency miners using NetFlow/IPFIX network measurements. Our method does not require inspecting the packets' payload; as a result, it achieves cost-efficient miner detection with accuracy similar to DPI-based techniques.
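
A minimal sketch of flow-level classification, assuming scikit-learn; the per-flow features named in the comments are plausible NetFlow/IPFIX derivatives, not the paper's exact feature set:

```python
# Sketch: classify flows as mining traffic from flow-record features only,
# with no payload inspection. Feature set and model choice are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Per-flow features, e.g.: duration, packets, bytes, mean packet size,
# mean inter-arrival time, bytes per second. All derivable from flow records.
X = np.random.rand(2000, 6)
y = np.random.randint(0, 2, 2000)   # 1 = mining flow

clf = RandomForestClassifier(n_estimators=100)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```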

2020-01-28
Park, Sunnyeo, Kim, Dohyeok, Son, Sooel.  2019.  An Empirical Study of Prioritizing JavaScript Engine Crashes via Machine Learning. Proceedings of the 2019 ACM Asia Conference on Computer and Communications Security. :646–657.

The early discovery of security bugs in JavaScript (JS) engines is crucial for protecting Internet users from adversaries abusing zero-day vulnerabilities. Browser vendors, bug bounty hunters, and security researchers have been eager to find such security bugs by leveraging state-of-the-art fuzzers as well as their domain expertise. They report a bug when observing a crash after executing their JS tests, since a crash is an early indicator of a potential bug. However, it is difficult to identify whether such a crash indeed invokes a security bug in a JS engine. Thus, unskilled bug reporters are unable to assess the security severity of their new bugs with JS engine crashes. Today, this classification of a reported security bug is completely manual, depending on the verdicts from JS engine vendors. We investigated the feasibility of applying various machine learning classifiers to determine whether an observed crash triggers a security bug. We designed and implemented CRScope, which classifies security and non-security bugs from given crash-dump files. Our experimental results on 766 crash instances demonstrate that CRScope achieved 0.85, 0.89, and 0.93 Area Under Curve (AUC) for Chakra, V8, and SpiderMonkey crashes, respectively. CRScope also achieved 0.84, 0.89, and 0.95 precision for Chakra, V8, and SpiderMonkey crashes, respectively. This outperforms the previous study and existing tools, including Exploitable and AddressSanitizer. CRScope is capable of learning domain-specific expertise from past verdicts on reported bugs and automatically classifying JS engine security bugs, which helps improve the scalable classification of security bugs.
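
A minimal sketch of crash-dump triage as text classification, assuming scikit-learn; TF-IDF over dump text and the toy dumps and labels are illustrative stand-ins, since CRScope's real features are richer:

```python
# Sketch: classify crash dumps as security vs. non-security bugs.
# Toy dumps and labels; the real system learns from vendor verdicts.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

dumps = ["SIGSEGV write at 0x41414141 in JIT code",
         "assertion failed in GC"] * 50
labels = np.array([1, 0] * 50)      # 1 = security bug per vendor verdict

X = TfidfVectorizer().fit_transform(dumps)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3,
                                          random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```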

KADOGUCHI, Masashi, HAYASHI, Shota, HASHIMOTO, Masaki, OTSUKA, Akira.  2019.  Exploring the Dark Web for Cyber Threat Intelligence Using Machine Learning. 2019 IEEE International Conference on Intelligence and Security Informatics (ISI). :200–202.

In recent years, cyber attack techniques have become increasingly sophisticated, and blocking attacks is more and more difficult even when countermeasures are taken. To handle this situation successfully, it is crucial to have a prediction of cyber attacks, appropriate precautions, and effective utilization of the cyber intelligence that enables these actions. Malicious hackers share various kinds of information through particular communities such as the dark web, indicating that a great deal of intelligence exists in cyberspace. This paper focuses on forums on the dark web and proposes an approach to extract forums which include important information or intelligence from huge numbers of forums, and to identify traits of each forum using methodologies such as machine learning and natural language processing. This approach will allow us to grasp emerging threats in cyberspace and take appropriate measures against malicious activities.

2020-12-02
Abeysekara, P., Dong, H., Qin, A. K..  2019.  Machine Learning-Driven Trust Prediction for MEC-Based IoT Services. 2019 IEEE International Conference on Web Services (ICWS). :188—192.

We propose a distributed machine-learning architecture to predict the trustworthiness of sensor services in Mobile Edge Computing (MEC) based Internet of Things (IoT) services, which aligns well with the goals of MEC and the requirements of modern IoT systems. The proposed architecture models the training of a distributed trust prediction model over a topology of MEC environments as a Network Lasso problem, which allows simultaneous clustering and optimization on large-scale networked graphs. We then attempt to solve it using the Alternating Direction Method of Multipliers (ADMM) in a way that makes it suitable for MEC-based IoT systems. We present analytical and simulation results to show the validity and efficiency of the proposed solution.
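
For reference, the Network Lasso objective (as formulated by Hallac et al.) that the trust-model training is cast as, where each MEC node i fits local parameters x_i and the weighted edge penalty pulls neighbouring nodes' models together:

```latex
% Network Lasso objective over a graph G = (V, E) of MEC environments:
% node i keeps local model parameters x_i; the weighted edge penalty
% encourages neighbouring nodes to agree, clustering their trust models.
\min_{x_1, \dots, x_{|V|}}
  \sum_{i \in V} f_i(x_i)
  + \lambda \sum_{(j,k) \in E} w_{jk} \, \lVert x_j - x_k \rVert_2
```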

2020-08-28
Perry, Lior, Shapira, Bracha, Puzis, Rami.  2019.  NO-DOUBT: Attack Attribution Based On Threat Intelligence Reports. 2019 IEEE International Conference on Intelligence and Security Informatics (ISI). :80—85.

The task of attack attribution, i.e., identifying the entity responsible for an attack, is complicated and usually requires the involvement of an experienced security expert. Prior attempts to automate attack attribution apply various machine learning techniques on features extracted from the malware's code and behavior in order to identify other similar malware whose authors are known. However, the same malware can be reused by multiple actors, and the actor who performed an attack using a malware might differ from the malware's author. Moreover, information collected during an incident may contain many clues about the identity of the attacker in addition to the malware used. In this paper, we propose a method of attack attribution based on textual analysis of threat intelligence reports, using state-of-the-art algorithms and models from the fields of machine learning and natural language processing (NLP). We have developed a new text representation algorithm which captures the context of the words and requires minimal feature engineering. Our approach relies on a vector space representation of incident reports derived from a small collection of labeled reports and a large corpus of general security literature. Both datasets have been made available to the research community. Experimental results show that the proposed representation can attribute attacks more accurately than the baseline representations. In addition, we show how the proposed approach can be used to identify novel previously unseen threat actors and to identify similarities between known threat actors.
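
A minimal sketch of report-based attribution, assuming scikit-learn; a TF-IDF bigram baseline stands in for the paper's context-aware representation, and the reports and actor labels are invented:

```python
# Sketch: attribute incident reports to threat actors from text alone.
# A TF-IDF baseline; the paper learns a custom context-aware embedding.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = ["spear phishing with macro dropper, C2 over DNS",
           "supply chain compromise of build server",
           "macro dropper and DNS tunnelling again"]
actors = ["APT-A", "APT-B", "APT-A"]   # hypothetical labels

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(reports, actors)
print(model.predict(["DNS tunnelling after phishing lure"]))
```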

2020-07-09
Nisha, D, Sivaraman, E, Honnavalli, Prasad B.  2019.  Predicting and Preventing Malware in Machine Learning Model. 2019 10th International Conference on Computing, Communication and Networking Technologies (ICCCNT). :1—7.

Machine learning is a major area of artificial intelligence that enables computers to learn without being explicitly programmed. As machine learning is widely used to make decisions automatically, attackers have a strong incentive to manipulate the predictions generated by machine learning models. In this paper we study the different types of attacks on machine learning models and their countermeasures. Our survey found security threats in various algorithms, such as the K-nearest-neighbors (KNN) classifier, random forest, AdaBoost, support vector machine (SVM), and decision tree; we revisit existing security threats and examine possible countermeasures during the training and prediction phases of a machine learning model. There are two types of attacks on a machine learning model: causative attacks, which occur during the training phase, and exploratory attacks, which occur during the prediction phase. We also discuss countermeasures for machine learning models, namely data sanitization, algorithm robustness enhancement, and privacy-preserving techniques.
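
A minimal sketch of one countermeasure the paper discusses, data sanitization, assuming scikit-learn; the injected outliers are a crude stand-in for causative poisoning:

```python
# Sketch: data sanitization against causative (training-time) attacks.
# Filter suspected poisoned points from the training set before fitting.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import KNeighborsClassifier

X = np.random.randn(300, 4)
y = np.random.randint(0, 2, 300)
X[:10] += 8                        # crude stand-in for poisoned samples

mask = IsolationForest(contamination=0.05, random_state=0).fit_predict(X) == 1
clf = KNeighborsClassifier().fit(X[mask], y[mask])  # train on sanitized data
print("removed", (~mask).sum(), "suspect points")
```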

2020-01-28
Kurniawan, Agus, Kyas, Marcel.  2019.  Securing Machine Learning Engines in IoT Applications with Attribute-Based Encryption. 2019 IEEE International Conference on Intelligence and Security Informatics (ISI). :30–34.

Machine learning has been adopted widely to perform prediction and classification. Implementing machine learning increases security risks when the computation process involves sensitive data in training and testing computations. We present a proposed system to protect machine learning engines in an IoT environment without modifying the internal machine learning architecture. Our proposed system is designed to be passwordless and to eliminate third parties from executing machine learning transactions. To evaluate our proposed system, we conduct experiments with machine learning transactions on an IoT board and measure the computation time of each transaction. The experimental results show that our proposed system can address security issues in machine learning computation with low time consumption.

2020-11-04
Khalid, F., Hanif, M. A., Rehman, S., Ahmed, R., Shafique, M..  2019.  TrISec: Training Data-Unaware Imperceptible Security Attacks on Deep Neural Networks. 2019 IEEE 25th International Symposium on On-Line Testing and Robust System Design (IOLTS). :188—193.

Most data manipulation attacks on deep neural networks (DNNs) during the training stage introduce a perceptible noise that can be countered by preprocessing during inference or identified during the validation phase. Therefore, data poisoning attacks during inference (e.g., adversarial attacks) are becoming more popular. However, many of them do not consider the imperceptibility factor in their optimization algorithms, and can be detected by correlation and structural similarity analysis, or are noticeable (e.g., by humans) in a multi-level security system. Moreover, the majority of inference attacks rely on some knowledge about the training dataset. In this paper, we propose a novel methodology which automatically generates imperceptible attack images by using the back-propagation algorithm on pre-trained DNNs, without requiring any information about the training dataset (i.e., completely training data-unaware). We present a case study on traffic sign detection using a VGGNet trained on the German Traffic Sign Recognition Benchmark dataset in an autonomous driving use case. Our results demonstrate that the generated attack images successfully cause misclassification while remaining imperceptible in both “subjective” and “objective” quality tests.
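
A minimal sketch of the underlying idea, back-propagating through a pre-trained network to the input, assuming PyTorch/torchvision; this is a generic targeted gradient attack, and TrISec's explicit imperceptibility optimization (correlation/SSIM terms) is not reproduced:

```python
# A generic targeted gradient attack on a pre-trained classifier.
# Illustrates back-propagation to the input only; TrISec additionally
# optimizes explicitly for imperceptibility, which is not shown here.
import torch
import torchvision.models as models

net = models.vgg16(weights="IMAGENET1K_V1").eval()  # pre-trained; no training data needed
x = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a sign image
target = torch.tensor([1])                          # attacker-chosen class

for _ in range(10):                                 # small steps keep the noise subtle
    loss = torch.nn.functional.cross_entropy(net(x), target)
    loss.backward()
    with torch.no_grad():
        x -= 1e-3 * x.grad.sign()                   # step toward the target class
        x.clamp_(0, 1)                              # stay in valid pixel range
    x.grad.zero_()

print("predicted class:", net(x).argmax().item())
```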

2020-01-28
Zizzo, Giulio, Hankin, Chris, Maffeis, Sergio, Jones, Kevin.  2019.  Adversarial Machine Learning Beyond the Image Domain. Proceedings of the 56th Annual Design Automation Conference 2019. :1–4.
Machine learning systems have had enormous success in a wide range of fields, from computer vision and natural language processing to anomaly detection. However, such systems are vulnerable to attackers who can cause deliberate misclassification by introducing small perturbations. With machine learning systems being proposed for cyber attack detection, such attackers are cause for serious concern. Despite this, the vast majority of adversarial machine learning security research is focused on the image domain. This work gives a brief overview of adversarial machine learning and of machine learning used in cyber attack detection, and suggests key differences between the traditional image domain of adversarial machine learning and the cyber domain. Finally, we show an adversarial machine learning attack on an industrial control system.
2020-04-10
Bagui, Sikha, Nandi, Debarghya, Bagui, Subhash, White, Robert Jamie.  2019.  Classifying Phishing Email Using Machine Learning and Deep Learning. 2019 International Conference on Cyber Security and Protection of Digital Services (Cyber Security). :1—2.

In this work, we applied deep semantic analysis and machine learning and deep learning techniques to capture inherent characteristics of email text and classify emails as phishing or non-phishing.
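
A minimal sketch of the classical machine-learning side, assuming scikit-learn; the toy emails, labels, and naive Bayes model are illustrative, not the paper's evaluated pipeline:

```python
# Sketch: bag-of-words phishing/ham classifier. The paper also evaluates
# deep models, which this simple baseline does not reproduce.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = ["verify your account now at http://paypa1.example",
          "meeting moved to 3pm, agenda attached",
          "your mailbox is full, click here to upgrade"]
labels = [1, 0, 1]                  # 1 = phishing

clf = make_pipeline(CountVectorizer(), MultinomialNB()).fit(emails, labels)
print(clf.predict(["click here to verify your mailbox"]))
```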

2020-01-27
Tuba, Eva, Jovanovic, Raka, Zivkovic, Dejan, Beko, Marko, Tuba, Milan.  2019.  Clustering Algorithm Optimized by Brain Storm Optimization for Digital Image Segmentation. 2019 7th International Symposium on Digital Forensics and Security (ISDFS). :1–6.
In the last several decades, the use of digital images has extended into numerous areas. Thanks to various digital image processing methods, they have become part of fields such as astronomy, agriculture, and more. One of the main tasks in image processing applications is segmentation. Since segmentation represents a rather important problem, various methods have been proposed in the past. One of these methods, explored in this paper, is the use of clustering algorithms. We propose the k-means algorithm for digital image segmentation. A well-known drawback of the k-means algorithm is its high likelihood of getting trapped in local optima. In this paper we propose the brain storm optimization algorithm for optimizing the k-means algorithm used for digital image segmentation. Our proposed algorithm is tested on several benchmark images and the results are compared with other state-of-the-art algorithms. The proposed method outperformed the existing methods.
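
A minimal sketch of k-means colour segmentation, assuming scikit-learn; standard k-means++ seeding is used where the paper substitutes brain storm optimization:

```python
# Sketch: k-means colour segmentation of an image. The paper optimizes
# k-means with Brain Storm Optimization; plain k-means++ is used here.
import numpy as np
from sklearn.cluster import KMeans

image = np.random.rand(64, 64, 3)   # placeholder RGB image
pixels = image.reshape(-1, 3)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pixels)
segmented = km.cluster_centers_[km.labels_].reshape(image.shape)
print("segment sizes:", np.bincount(km.labels_))
```
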
2020-02-17
Liu, Haitian, Han, Weihong, Jia, Yan.  2019.  Construction of Cyber Range Network Security Indication System Based on Deep Learning. 2019 IEEE Fourth International Conference on Data Science in Cyberspace (DSC). :495–502.
The main purpose of this paper is to solve the problem of quantitative and qualitative evaluation of network security. Drawing on relevant network security situation assessment algorithms, and by means of advanced artificial intelligence deep learning technology, we build a network security indication system based on a cyber range and optimize the guidance model of the deep learning technology.
2020-03-16
Babay, Amy, Schultz, John, Tantillo, Thomas, Beckley, Samuel, Jordan, Eamon, Ruddell, Kevin, Jordan, Kevin, Amir, Yair.  2019.  Deploying Intrusion-Tolerant SCADA for the Power Grid. 2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN). :328–335.

While there has been considerable research on making power grid Supervisory Control and Data Acquisition (SCADA) systems resilient to attacks, the problem of transitioning these technologies into deployed SCADA systems remains largely unaddressed. We describe our experience and lessons learned in deploying an intrusion-tolerant SCADA system in two realistic environments: a red team experiment in 2017 and a power plant test deployment in 2018. These experiences resulted in technical lessons related to developing an intrusion-tolerant system with a real deployable application, preparing a system for deployment in a hostile environment, and supporting protocol assumptions in that hostile environment. We also discuss some meta-lessons regarding the cultural aspects of transitioning academic research into practice in the power industry.

2020-02-17
Wang, Xinda, Sun, Kun, Batcheller, Archer, Jajodia, Sushil.  2019.  Detecting "0-Day" Vulnerability: An Empirical Study of Secret Security Patch in OSS. 2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN). :485–492.
Security patches in open source software (OSS) not only provide security fixes to identified vulnerabilities, but also make the vulnerable code public to attackers. Therefore, armored attackers may misuse this information to launch N-day attacks on unpatched OSS versions. The best practice for preventing this type of N-day attack is to keep upgrading the software to the latest version without delay. However, due to concerns over reputation and ease of software development management, software vendors may choose to secretly patch their vulnerabilities in a new version without reporting them to CVE or even providing any explicit description in their change logs. When those secretly patched vulnerabilities are identified by armored attackers, they can be turned into powerful "0-day" attacks, which can be exploited to compromise not only unpatched versions of the same software, but also similar types of OSS (e.g., SSL libraries) that may contain the same vulnerability due to code clones or similar design/implementation logic. Therefore, it is critical to identify secret security patches and downgrade the risk of those "0-day" attacks to at least "n-day" attacks. In this paper, we develop a defense system and implement a toolset to automatically identify secret security patches in open source software. To distinguish security patches from other patches, we first build a security patch database that contains more than 4700 security patches mapped to records in the CVE list. Next, we identify a set of features to help distinguish security patches from non-security ones using machine learning approaches. Finally, we use code clone identification mechanisms to discover similar patches or vulnerabilities in similar types of OSS. The experimental results show our approach can achieve good detection performance. A case study on OpenSSL, LibreSSL, and BoringSSL discovers 12 secret security patches.
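
A minimal sketch of the patch-classification step, assuming scikit-learn; the per-commit features named in the comments are plausible diff statistics, not the paper's actual feature set:

```python
# Sketch: classify commits as security patches from lightweight diff features.
# Feature set (keywords, diff statistics) is illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Per-commit features, e.g.: number of files touched, hunks, added and
# removed lines, presence of bounds checks, keyword hits ("overflow", ...).
X = np.random.rand(800, 7)
y = np.random.randint(0, 2, 800)   # 1 = security patch (CVE-mapped)

clf = RandomForestClassifier(n_estimators=200)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```
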
2020-01-27
Álvarez Almeida, Luis Alfredo, Carlos Martinez Santos, Juan.  2019.  Evaluating Features Selection on NSL-KDD Data-Set to Train a Support Vector Machine-Based Intrusion Detection System. 2019 IEEE Colombian Conference on Applications in Computational Intelligence (ColCACI). :1–5.
The integrity of information and services is one of the more evident concerns in the world of global information security, due to the fact that it has economic repercussions on the digital industry. For this reason, big companies spend a lot of money on systems that protect them against cyber-attacks like Denial of Service attacks. In this article, we use all the attributes of the NSL-KDD data-set to train and test a Support Vector Machine model. We then apply a feature selection method to obtain the most relevant attributes within the aforementioned data-set and train the model again. The main goal is to compare the results obtained in both instances of training and validate which is more efficient.
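
A minimal sketch of the two training instances being compared, assuming scikit-learn and synthetic data in place of NSL-KDD; the choice of univariate F-test selection and k=15 is hypothetical:

```python
# Sketch: SVM on all NSL-KDD attributes vs. SVM on the k best-scoring ones.
# Synthetic data stands in for the real 41-attribute data-set.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.rand(1000, 41)        # NSL-KDD has 41 attributes
y = np.random.randint(0, 2, 1000)   # 1 = attack (e.g., DoS)

full = make_pipeline(StandardScaler(), SVC())
reduced = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=15), SVC())
print("all features :", cross_val_score(full, X, y, cv=5).mean())
print("top-15 only  :", cross_val_score(reduced, X, y, cv=5).mean())
```
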
2019-10-02
Hussein, A., Salman, O., Chehab, A., Elhajj, I., Kayssi, A..  2019.  Machine Learning for Network Resiliency and Consistency. 2019 Sixth International Conference on Software Defined Systems (SDS). :146–153.

Being able to describe a specific network as consistent is a large step towards resiliency. Next to the importance of security lies the necessity of consistency verification. Attackers are currently focusing on targeting small and crucial goals such as network configurations or flow tables. These types of attacks would defeat the whole purpose of a security system when built on top of an inconsistent network. Advances in Artificial Intelligence (AI) are playing a key role in ensuring a fast response to the large number of evolving threats. Software Defined Networking (SDN), being centralized by design, offers a global overview of the network. Robustness and adaptability are part of the package offered by programmable networking, which drove us to consider the integration of AI and SDN. The general goal of our series is to achieve an Artificial Intelligence Resiliency System (ARS). The aim of this paper is to propose a new AI-based consistency verification system, which will be part of ARS in our future work. A comparison of different deep learning architectures shows that Convolutional Neural Networks (CNN) give the best results, with an accuracy of 99.39% on our dataset and 96% on our consistency test scenario.
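
A minimal sketch of consistency verification as supervised classification, assuming TensorFlow/Keras; the fixed-length encoding of network snapshots and the 1D-CNN shape are hypothetical, since the abstract does not pin these down:

```python
# Sketch: a 1D CNN over fixed-length encodings of network/flow-table state,
# labelling each snapshot as consistent or inconsistent. Encoding is invented.
import numpy as np
from tensorflow.keras import layers, models

X = np.random.rand(512, 128, 1)     # 128-dim encoded network snapshots
y = np.random.randint(0, 2, 512)    # 1 = consistent

model = models.Sequential([
    layers.Input(shape=(128, 1)),
    layers.Conv1D(32, 5, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, verbose=0)
print(model.evaluate(X, y, verbose=0))
```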