Biblio

Found 631 results

Filters: Keyword is Deep Learning
2022-02-09
Zhai, Tongqing, Li, Yiming, Zhang, Ziqi, Wu, Baoyuan, Jiang, Yong, Xia, Shu-Tao.  2021.  Backdoor Attack Against Speaker Verification. ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :2560–2564.
Speaker verification has been widely and successfully adopted in many mission-critical areas for user identification. The training of speaker verification requires a large amount of data; therefore, users usually need to adopt third-party data (e.g., data from the Internet or a third-party data company). This raises the question of whether adopting untrusted third-party data can pose a security threat. In this paper, we demonstrate that it is possible to inject a hidden backdoor into speaker verification models by poisoning the training data. Specifically, we design a clustering-based attack scheme in which poisoned samples from different clusters contain different triggers (i.e., pre-defined utterances), based on our understanding of verification tasks. The infected models behave normally on benign samples, while attacker-specified unenrolled triggers successfully pass verification even if the attacker has no information about the enrolled speaker. We also demonstrate that existing backdoor attacks cannot be directly adopted to attack speaker verification. Our approach not only provides a new perspective for designing novel attacks, but also serves as a strong baseline for improving the robustness of verification methods. The code for reproducing the main results is available at https://github.com/zhaitongqing233/Backdoor-attack-against-speaker-verification.
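For readers who want a feel for the attack's structure, the sketch below illustrates the clustering-based poisoning step under heavy assumptions: speakers are grouped by a generic embedding, each cluster gets its own synthetic trigger tone, and a small fraction of utterances is poisoned. The cluster count, trigger signals, poisoning rate and embedding dimensions are illustrative choices, not the authors' settings.

```python
# Sketch of a clustering-based poisoning step: speakers are grouped by an
# embedding, and every poisoned utterance from a given cluster is mixed with
# that cluster's trigger. Cluster count, trigger design and poisoning rate are
# illustrative assumptions only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def make_triggers(n_clusters, length=16000, sr=16000):
    """One short sine-tone 'utterance' per cluster as a stand-in trigger."""
    t = np.arange(length) / sr
    return [0.1 * np.sin(2 * np.pi * (400 + 200 * k) * t) for k in range(n_clusters)]

def poison(utterances, speaker_embeddings, n_clusters=4, poison_rate=0.1):
    """Return utterances with a per-cluster trigger mixed into a small subset."""
    clusters = KMeans(n_clusters=n_clusters, n_init=10,
                      random_state=0).fit_predict(speaker_embeddings)
    triggers = make_triggers(n_clusters, length=utterances.shape[1])
    poisoned = utterances.copy()
    idx = rng.choice(len(utterances), size=int(poison_rate * len(utterances)),
                     replace=False)
    for i in idx:
        poisoned[i] += triggers[clusters[i]]
    return poisoned, clusters

# Toy data: 100 one-second utterances and 100 speaker embeddings.
X = rng.standard_normal((100, 16000)) * 0.01
emb = rng.standard_normal((100, 192))
X_poisoned, cluster_ids = poison(X, emb)
```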
2022-02-07
Khetarpal, Anavi, Mallik, Abhishek.  2021.  Visual Malware Classification Using Transfer Learning. 2021 Fourth International Conference on Electrical, Computer and Communication Technologies (ICECCT). :1–5.
The proliferation of malware attacks hinders cybersecurity, posing a significant threat to our devices. The variety and number of both known and unknown malware make detection difficult. Research suggests that the ramifications of malware are only becoming worse with time, and hence malware analysis is crucial. This paper proposes a visual malware classification technique that converts malware executables into their visual representations and obtains grayscale images of malicious files. These grayscale images are then used to classify malicious files into their respective malware families by passing them through deep convolutional neural networks (CNN). As part of the deep CNN, we use various ImageNet models and compare their performance.
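As a rough illustration of this kind of pipeline, the sketch below converts a binary's raw bytes into a grayscale image and fine-tunes a pretrained ImageNet model on such images. The fixed image width, the ResNet-18 backbone and the family count are assumptions for illustration; the paper compares several ImageNet models rather than prescribing one.

```python
# Sketch: read a binary file as bytes, reshape to a grayscale image, and
# fine-tune a pretrained ImageNet CNN on the resulting images. File paths,
# image width and the choice of ResNet-18 are illustrative assumptions.
import numpy as np
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

def binary_to_grayscale(path, width=256):
    """Interpret a file's raw bytes as pixel intensities of a 2-D image."""
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    height = len(data) // width
    return Image.fromarray(data[: height * width].copy().reshape(height, width), mode="L")

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),  # pretrained models expect 3 channels
    transforms.ToTensor(),
])

# Transfer learning: reuse ImageNet weights, replace the final layer.
num_families = 25  # e.g., a Malimg-like family count; adjust to the dataset at hand
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # newer torchvision API
model.fc = nn.Linear(model.fc.in_features, num_families)

img = preprocess(binary_to_grayscale("sample.exe")).unsqueeze(0)
logits = model(img)  # train with CrossEntropyLoss over labelled family images
```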
Wang, Shuwei, Wang, Qiuyun, Jiang, Zhengwei, Wang, Xuren, Jing, Rongqi.  2021.  A Weak Coupling of Semi-Supervised Learning with Generative Adversarial Networks for Malware Classification. 2020 25th International Conference on Pattern Recognition (ICPR). :3775–3782.
Malware classification helps to understand the purpose of malware and is also an important part of attack detection. Due to the continuous innovation and development of artificial intelligence, it is a trend to combine deep learning with malware classification. In this paper, we propose an improved malware image rescaling algorithm (IMIR) based on a local mean algorithm. The main goal of IMIR is to reduce the loss of information from samples during the process of converting binary files to image files. We then construct a neural network structure based on the VGG model, which is suitable for image classification. In the real world, many malware family labels are inaccurate or missing. To deal with this situation, we propose a novel method to train the deep neural network with a Semi-supervised Generative Adversarial Network (SGAN), which only needs a small amount of malware with accurate family labels. By integrating SGAN with weak coupling, we retain only weak links between the supervised and unsupervised parts of SGAN. This improves the accuracy of malware classification by making the classifier more independent of the discriminator. The experimental results demonstrate that our model achieves favorable performance. The recall for each family in our data set is higher than 93.75%.
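A minimal sketch of a local-mean rescaling step in the spirit of IMIR (not the authors' exact algorithm): each output pixel is the mean of a contiguous block of bytes, so shrinking a long binary to a fixed-size image averages information rather than discarding it. Block size and output shape are illustrative assumptions.

```python
# Sketch of local-mean rescaling: instead of cropping or naive resampling,
# each output pixel is the mean of a local block of bytes, which limits the
# information discarded when a long binary is shrunk to a fixed-size image.
# Output shape is an illustrative assumption.
import numpy as np

def local_mean_image(raw_bytes, out_shape=(64, 64)):
    data = np.frombuffer(raw_bytes, dtype=np.uint8).astype(np.float32)
    target = out_shape[0] * out_shape[1]
    # Pad so the byte stream divides evenly into `target` local blocks.
    block = int(np.ceil(len(data) / target))
    padded = np.pad(data, (0, block * target - len(data)), mode="edge")
    # Each pixel is the local mean of one block of consecutive bytes.
    pixels = padded.reshape(target, block).mean(axis=1)
    return pixels.reshape(out_shape).astype(np.uint8)

img = local_mean_image(open("sample.bin", "rb").read())
```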
Kumar, Shashank, Meena, Shivangi, Khosla, Savya, Parihar, Anil Singh.  2021.  AE-DCNN: Autoencoder Enhanced Deep Convolutional Neural Network For Malware Classification. 2021 International Conference on Intelligent Technologies (CONIT). :1–5.
Malware classification is a problem of great significance in the domain of information security. This is because the classification of malware into respective families helps in determining their intent, activity, and level of threat. In this paper, we propose a novel deep learning approach to malware classification. The proposed method converts malware executables into image-based representations. These images are then classified into different malware families using an autoencoder enhanced deep convolutional neural network (AE-DCNN). In particular, we propose a novel training mechanism wherein a DCNN classifier is trained with the help of an encoder. We conjecture that using an encoder in the proposed way provides the classifier with the extra information that is perhaps lost during the forward propagation, thereby leading to better results. The proposed approach eliminates the use of feature engineering, reverse engineering, disassembly, and other domain-specific techniques earlier used for malware classification. On the standard Malimg dataset, we achieve a 10-fold cross-validation accuracy of 99.38% and F1-score of 99.38%. Further, due to the texture-based analysis of malware files, the proposed technique is resilient to several obfuscation techniques.
Lee, Shan-Hsin, Lan, Shen-Chieh, Huang, Hsiu-Chuan, Hsu, Chia-Wei, Chen, Yung-Shiu, Shieh, Shiuhpyng.  2021.  EC-Model: An Evolvable Malware Classification Model. 2021 IEEE Conference on Dependable and Secure Computing (DSC). :1–8.
Malware evolves quickly as new attack, evasion and mutation techniques are commonly used by hackers to build new malware families. For malware detection and classification, the multi-class learning model is one of the most popular machine learning models being used. To recognize malicious programs, a multi-class model requires malware types to be predefined as output classes in advance, which cannot be dynamically adjusted after the model is trained. When a new variant or type of malicious program is discovered, the trained multi-class model is no longer valid and has to be retrained completely. This consumes a significant amount of time and resources, and cannot adapt quickly to meet the timely requirements of dealing with dynamically evolving malware types. To cope with this problem, an evolvable malware classification deep learning model, namely EC-Model, is proposed in this paper, which can dynamically adapt to new malware types without the need for full retraining. Consequently, the reaction time can be significantly reduced to meet the timely requirements of malware classification. To the best of our knowledge, our work is the first attempt to adopt multi-task deep learning for evolvable malware classification.
Priyadarshan, Pradosh, Sarangi, Prateek, Rath, Adyasha, Panda, Ganapati.  2021.  Machine Learning Based Improved Malware Detection Schemes. 2021 11th International Conference on Cloud Computing, Data Science Engineering (Confluence). :925–931.
In recent years, protecting networks and computing systems from various types of digital attacks has become a challenging cybersecurity task. To preserve these systems, various innovative methods have been reported and implemented in practice. However, more research still needs to be carried out to achieve malware-free computing systems. In this paper, an attempt has been made to develop simple but reliable ML-based malware detection systems which can be implemented in practice. Keeping this in view, the present paper proposes and compares the performance of three ML-based malware detection systems applicable to computer systems. The proposed methods include k-NN, RF and LR for detection, and the extracted features comprise Byte and ASM features. The performance obtained from the simulation study of the proposed schemes has been evaluated in terms of ROC, log loss plot, accuracy, precision, recall, specificity, sensitivity and F1-score. The analysis of the various results clearly demonstrates that the RF-based malware detection scheme outperforms the models based on k-NN and LR. The detection efficiency of the proposed ML models is the same as or comparable to that of deep learning-based methods.
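A hedged sketch of the comparison described above, using scikit-learn stand-ins for the three detectors; the random feature matrix is a placeholder for Byte/ASM-derived features, and the metric set mirrors the abstract.

```python
# Sketch: k-NN, Random Forest and Logistic Regression evaluated on the same
# feature matrix. X is a placeholder for Byte/ASM-derived features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
X = rng.random((1000, 64))          # stand-in for Byte/ASM features
y = rng.integers(0, 2, size=1000)   # 0 = benign, 1 = malware
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "LR": LogisticRegression(max_iter=1000),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(name,
          "acc=%.3f" % accuracy_score(y_te, pred),
          "prec=%.3f" % precision_score(y_te, pred),
          "rec=%.3f" % recall_score(y_te, pred),
          "f1=%.3f" % f1_score(y_te, pred))
```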
Catak, Evren, Catak, Ferhat Ozgur, Moldsvor, Arild.  2021.  Adversarial Machine Learning Security Problems for 6G: mmWave Beam Prediction Use-Case. 2021 IEEE International Black Sea Conference on Communications and Networking (BlackSeaCom). :1–6.
6G is the next generation of communication systems. In recent years, machine learning algorithms have been applied widely in various fields such as health, transportation, and autonomous vehicles. Predictive algorithms will likewise be used in 6G problems. With the rapid development of deep learning techniques, it is critical to take security concerns into account when applying these algorithms. While machine learning offers significant advantages for 6G, the security of AI models is normally ignored. Since these algorithms have many applications in the real world, security is a vital part of them. This paper proposes a mitigation method for adversarial attacks against proposed 6G machine learning models for millimeter-wave (mmWave) beam prediction using adversarial learning. The main idea behind adversarial attacks against machine learning models is to produce faulty results by manipulating trained deep learning models for 6G applications for mmWave beam prediction. We also present the adversarial learning mitigation method's performance for 6G security in the millimeter-wave beam prediction application under the fast gradient sign method attack. The mean square errors of the defended model under attack are very close to those of the undefended model without attack.
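The sketch below shows the fast gradient sign method and an adversarial-training step of the kind the abstract describes, applied to a generic regression network standing in for the mmWave beam predictor. The network shape, epsilon and the clean/adversarial loss weighting are illustrative assumptions.

```python
# Sketch of FGSM and adversarial training for a generic regression model
# standing in for the beam predictor. Shapes, epsilon and loss are assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))
loss_fn = nn.MSELoss()

def fgsm(x, y, eps=0.05):
    """Fast gradient sign method: perturb inputs along the sign of the gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def adversarial_training_step(x, y, optimizer, eps=0.05):
    """Mix clean and FGSM-perturbed samples in the same update (adversarial learning)."""
    x_adv = fgsm(x, y, eps)
    optimizer.zero_grad()
    loss = 0.5 * loss_fn(model(x), y) + 0.5 * loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(16, 64), torch.randn(16, 32)
print(adversarial_training_step(x, y, opt))
```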
Han, Sung-Hwa.  2021.  Analysis of Data Transforming Technology for Malware Detection. 2021 21st ACIS International Winter Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD-Winter). :224–229.
As AI technology advances and its use increases, efforts to incorporate machine learning into malware detection are increasing. However, for malware learning, a standardized data set is required. Because malware is unstructured data, it cannot be learned from directly. To solve this problem, many studies have attempted to convert unstructured data into structured data. In this study, we surveyed the unstructured-to-structured data conversion methods proposed in prior studies and analyzed the features and limitations of each. As a result, most of the data conversion techniques suggest conversion mechanisms, but the scope of each technique has not been determined. The resulting data set is not suitable for use as training data because it has infinite properties.
2022-01-31
Dai, Wei, Berleant, Daniel.  2021.  Benchmarking Robustness of Deep Learning Classifiers Using Two-Factor Perturbation. 2021 IEEE International Conference on Big Data (Big Data). :5085–5094.
Deep learning (DL) classifiers are often unstable in that they may change significantly when retested on perturbed or low-quality images. This paper adds to the fundamental body of work on the robustness of DL classifiers. We introduce a new two-dimensional benchmarking matrix to evaluate the robustness of DL classifiers, and we also introduce a four-quadrant statistical visualization tool, including minimum accuracy, maximum accuracy, mean accuracy, and coefficient of variation, for benchmarking the robustness of DL classifiers. To measure the robustness of DL classifiers, we create 69 comprehensive benchmarking image sets, including a clean set, sets with single-factor perturbations, and sets with two-factor perturbation conditions. After collecting the experimental results, we first report that using two-factor perturbed images improves both the robustness and accuracy of DL classifiers. The two-factor perturbations include (1) two digital perturbations (salt & pepper noise and Gaussian noise) applied in both sequences, and (2) one digital perturbation (salt & pepper noise) and a geometric perturbation (rotation) applied in both sequences. All source code, related image sets, and results are shared on GitHub at https://github.com/caperock/robustai to support future academic research and industry projects.
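A small sketch of the two-factor perturbations listed above: salt & pepper noise combined with Gaussian noise, and salt & pepper noise combined with rotation, each applied in both orders. Noise amounts and the rotation angle are illustrative assumptions rather than the benchmark's settings.

```python
# Sketch of two-factor perturbations: salt & pepper plus Gaussian noise, and
# salt & pepper plus rotation, each applied in both orders.
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(0)

def salt_pepper(img, amount=0.02):
    out = img.copy()
    mask = rng.random(img.shape)
    out[mask < amount / 2] = 0.0
    out[mask > 1 - amount / 2] = 1.0
    return out

def gaussian(img, sigma=0.05):
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def rotated(img, angle=15):
    return rotate(img, angle, reshape=False, mode="nearest")

img = rng.random((224, 224))  # stand-in for a test image scaled to [0, 1]
two_factor_sets = {
    "sp_then_gauss": gaussian(salt_pepper(img)),
    "gauss_then_sp": salt_pepper(gaussian(img)),
    "sp_then_rot": rotated(salt_pepper(img)),
    "rot_then_sp": salt_pepper(rotated(img)),
}
```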
El-Allami, Rida, Marchisio, Alberto, Shafique, Muhammad, Alouani, Ihsen.  2021.  Securing Deep Spiking Neural Networks against Adversarial Attacks through Inherent Structural Parameters. 2021 Design, Automation Test in Europe Conference Exhibition (DATE). :774–779.
Deep Learning (DL) algorithms have gained popularity owing to their practical problem-solving capacity. However, they suffer from a serious integrity threat, i.e., their vulnerability to adversarial attacks. In the quest for DL trustworthiness, recent works have claimed the inherent robustness of Spiking Neural Networks (SNNs) to these attacks, without considering the variability in their structural spiking parameters. This paper explores the security enhancement of SNNs through internal structural parameters. Specifically, we investigate the robustness of SNNs to adversarial attacks with different values of the neuron's firing voltage threshold and time window boundaries. We thoroughly study SNN security under different adversarial attacks in the strong white-box setting, with different noise budgets and under variable spiking parameters. Our results show a significant impact of the structural parameters on SNN security, and promising sweet spots can be reached to design trustworthy SNNs with 85% higher robustness than a traditional non-spiking DL system. To the best of our knowledge, this is the first work that investigates the impact of structural parameters on the robustness of SNNs to adversarial attacks. The proposed contributions and the experimental framework are available online at https://github.com/rda-ela/SNN-Adversarial-Attacks to the community for reproducible research.
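To make the structural parameters concrete, the sketch below sweeps a firing threshold and a time-window length for a single leaky integrate-and-fire layer, written here with the snntorch library as one possible implementation; the tiny network, the threshold values and the window lengths are illustrative assumptions, not the paper's configuration.

```python
# Sketch: sweep the firing-voltage threshold and the time-window length of a
# leaky integrate-and-fire layer. snntorch is used as one possible SNN library;
# all parameter values below are illustrative assumptions.
import torch
import torch.nn as nn
import snntorch as snn

fc = nn.Linear(784, 10)

def run_snn(x, threshold=1.0, num_steps=25):
    """Accumulate output spikes over `num_steps` time steps; counts act as class scores."""
    lif = snn.Leaky(beta=0.9, threshold=threshold)
    mem = lif.init_leaky()
    spike_count = torch.zeros(x.shape[0], 10)
    for _ in range(num_steps):
        spk, mem = lif(fc(x), mem)
        spike_count += spk
    return spike_count

x = torch.rand(8, 784)
for thr in (0.5, 1.0, 2.0):        # firing-voltage thresholds
    for steps in (10, 25, 50):     # time-window lengths
        scores = run_snn(x, threshold=thr, num_steps=steps)
```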
Kumová, Věra, Pilát, Martin.  2021.  Beating White-Box Defenses with Black-Box Attacks. 2021 International Joint Conference on Neural Networks (IJCNN). :1–8.
Deep learning has achieved great results in the last decade; however, it is sensitive to so-called adversarial attacks - small perturbations of the input that cause the network to classify incorrectly. In recent years a number of attacks and defenses against these attacks have been described. Most of the defenses, however, focus on defending against gradient-based attacks. In this paper, we describe an evolutionary attack and show that the adversarial examples produced by the attack have different features than those from gradient-based attacks. We also show that these features mean that one of the state-of-the-art defenses fails to detect such attacks.
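A minimal sketch of an evolutionary, gradient-free attack in the spirit described above: candidate perturbations are scored only through the model's output probabilities, and the fittest candidates are mutated. Population size, mutation scale and the success criterion are illustrative assumptions; the paper's actual attack may differ.

```python
# Sketch of an evolutionary black-box attack: perturbations are selected and
# mutated using only the model's output scores, with no gradient access.
import numpy as np

rng = np.random.default_rng(0)

def evolutionary_attack(x, true_label, predict_proba, eps=0.05,
                        pop_size=20, generations=100):
    """Minimise the model's confidence in `true_label` within an L_inf ball of radius eps."""
    pop = rng.uniform(-eps, eps, size=(pop_size,) + x.shape)
    for _ in range(generations):
        scores = np.array([predict_proba(np.clip(x + p, 0, 1))[true_label] for p in pop])
        if scores.min() < 0.5:                           # confidence in the true label dropped
            break
        elite = pop[np.argsort(scores)[: pop_size // 4]]  # keep the best quarter
        children = elite[rng.integers(0, len(elite), pop_size)] \
            + rng.normal(0, eps / 10, size=(pop_size,) + x.shape)
        pop = np.clip(children, -eps, eps)
    best = pop[np.argmin([predict_proba(np.clip(x + p, 0, 1))[true_label] for p in pop])]
    return np.clip(x + best, 0, 1)

# `predict_proba` is any black-box scoring function, e.g. a wrapped DNN that
# maps an image to a vector of class probabilities.
```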
Zhao, Rui.  2021.  The Vulnerability of the Neural Networks Against Adversarial Examples in Deep Learning Algorithms. 2021 2nd International Conference on Computing and Data Science (CDS). :287–295.
With further development in fields such as computer vision, network security, and natural language processing, deep learning technology has gradually exposed certain security risks. Existing deep learning algorithms cannot effectively describe the essential characteristics of data, making them unable to give the correct result in the face of malicious input. Based on the current security threats faced by deep learning, this paper introduces the problem of adversarial examples in deep learning, surveys the existing black-box and white-box attack and defense methods, and classifies them. It briefly describes the application of some adversarial examples in different scenarios in recent years, compares several defense technologies against adversarial examples, and finally summarizes the problems in this research field and prospects for its future development. This paper introduces the common white-box attack methods in detail, and further compares the similarities and differences between black-box and white-box attacks. Correspondingly, the authors also introduce the defense methods, and analyze the performance of these methods against black-box and white-box attacks.
2022-01-25
Sun, Hao, Xu, Yanjie, Kuang, Gangyao, Chen, Jin.  2021.  Adversarial Robustness Evaluation of Deep Convolutional Neural Network Based SAR ATR Algorithm. 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS. :5263–5266.
Robustness, both to accidental and to malevolent perturbations, is a crucial determinant of the successful deployment of deep convolutional neural network based SAR ATR systems in various security-sensitive applications. This paper performs a detailed adversarial robustness evaluation of deep convolutional neural network based SAR ATR models across two publicly available SAR target recognition datasets. For each model, seven different adversarial perturbations, ranging from gradient-based optimization to self-supervised feature distortion, are generated for each testing image. Besides adversarial average recognition accuracy, feature attribution techniques have also been adopted to analyze the feature diffusion effect of adversarial attacks, which promotes the understanding of the vulnerability of deep learning models.
Islam, Muhammad Aminul, Veal, Charlie, Gouru, Yashaswini, Anderson, Derek T..  2021.  Attribution Modeling for Deep Morphological Neural Networks using Saliency Maps. 2021 International Joint Conference on Neural Networks (IJCNN). :1–8.
Mathematical morphology has been explored in deep learning architectures, as a substitute for convolution, for problems like pattern recognition and object detection. One major advantage of using morphology in deep learning is the utility of morphological erosion and dilation. Specifically, these operations naturally embody interpretability due to their underlying connections to the analysis of geometric structures. While the use of these operations results in explainable learned filters, morphological deep learning lacks attribution modeling, i.e., a paradigm to specify what areas of the original observed image are important. Furthermore, convolution-based deep learning has achieved attribution modeling through a variety of neural eXplainable Artificial Intelligence (XAI) paradigms (e.g., saliency maps, integrated gradients, guided backpropagation, and gradient class activation mapping). Thus, a problem for morphology-based deep learning is that these XAI methods do not have a morphological interpretation due to the differences in the underlying mathematics. Herein, we extend the neural XAI paradigm of saliency maps to morphological deep learning, and by doing so, provide an example of morphological attribution modeling. Furthermore, our qualitative results highlight some advantages of using morphological attribution modeling.
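For context, a plain saliency map (the XAI paradigm being extended here) can be computed as the gradient of the predicted class score with respect to the input; the sketch below does this for a generic convolutional stand-in, not for a morphological network.

```python
# Sketch of a vanilla saliency map: each pixel's attribution is the magnitude
# of the gradient of the predicted class score with respect to that pixel.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)

def saliency_map(x):
    x = x.clone().detach().requires_grad_(True)
    score = model(x).max(dim=1).values.sum()   # score of the predicted class
    score.backward()
    return x.grad.abs().squeeze(1)             # per-pixel attribution

sal = saliency_map(torch.rand(1, 1, 28, 28))
```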
Marulli, Fiammetta, Balzanella, Antonio, Campanile, Lelio, Iacono, Mauro, Mastroianni, Michele.  2021.  Exploring a Federated Learning Approach to Enhance Authorship Attribution of Misleading Information from Heterogeneous Sources. 2021 International Joint Conference on Neural Networks (IJCNN). :1–8.
Authorship Attribution (AA) is currently applied in several applications, including fraud detection and anti-plagiarism checks; this task can leverage stylometry and Natural Language Processing techniques. In this work, we explored some strategies to enhance the performance of an AA task for the automatic detection of false and misleading information (e.g., fake news). We set up a text classification model for AA based on stylometry, exploiting recurrent deep neural networks, and implemented two learning tasks trained on the same collection of fake and real news, comparing their performances: one based on a Federated Learning architecture, the other on a centralized architecture. The goal was to discriminate potential fake information from true information when the fake news comes from heterogeneous sources with different styles. Preliminary experiments show that the distributed approach significantly improves recall with respect to the centralized model. As expected, precision was lower in the distributed model. This aspect, coupled with the statistical heterogeneity of data, represents some open issues that will be further investigated in future work.
2022-01-10
Freas, Christopher B., Shah, Dhara, Harrison, Robert W..  2021.  Accuracy and Generalization of Deep Learning Applied to Large Scale Attacks. 2021 IEEE International Conference on Communications Workshops (ICC Workshops). :1–6.
Distributed denial of service attacks threaten the security and health of the Internet. Remediation relies on up-to-date and accurate attack signatures. Signature-based detection is relatively inexpensive computationally. Yet, signatures are inflexible when small variations exist in the attack vector. Attackers exploit this rigidity by altering their attacks to bypass the signatures. Our previous work revealed a critical problem with conventional machine learning models. Conventional models are unable to generalize on the temporal nature of network flow data to classify attacks. We thus explored the use of deep learning techniques on real flow data. We found that a variety of attacks could be identified with high accuracy compared to previous approaches. We show that a convolutional neural network can be implemented for this problem that is suitable for large volumes of data while maintaining useful levels of accuracy.
Ugwu, Chukwuemeka Christian, Obe, Olumide Olayinka, Popoọla, Olugbemiga Solomon, Adetunmbi, Adebayo Olusọla.  2021.  A Distributed Denial of Service Attack Detection System using Long Short Term Memory with Singular Value Decomposition. 2020 IEEE 2nd International Conference on Cyberspace (CYBER NIGERIA). :112–118.
The increase in online activity during the COVID-19 pandemic has generated a surge in network traffic capable of expanding the scope of DDoS attacks. Cyber criminals can now afford to launch massive DDoS attacks capable of degrading the performance of conventional machine learning based IDS models. Hence, there is an urgent need for an effective DDoS attack detection model with the capacity to handle large magnitudes of DDoS attack traffic. This study proposes a deep learning based DDoS attack detection system using Long Short Term Memory (LSTM). The proposed model was evaluated on the UNSW-NB15 and NSL-KDD intrusion datasets, whereby twenty-three (23) and twenty (20) attack features were extracted from UNSW-NB15 and NSL-KDD, respectively, using Singular Value Decomposition (SVD). The results from the proposed model show significant improvement when compared with results from some conventional machine learning techniques such as Naïve Bayes (NB), Decision Tree (DT), and Support Vector Machine (SVM), with accuracies of 94.28% and 90.59% on the two datasets, respectively. Furthermore, a comparative analysis of LSTM with other deep learning results reported in the literature justified the choice of LSTM among its deep learning peers for detecting DDoS attacks over a network.
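A hedged sketch of the SVD-plus-LSTM pipeline described above, with random data standing in for the intrusion datasets; the 23-component reduction follows the number quoted for UNSW-NB15, while the layer sizes and sequence handling are illustrative assumptions.

```python
# Sketch: SVD-based feature reduction followed by an LSTM classifier.
# The toy data stands in for UNSW-NB15/NSL-KDD records.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import TruncatedSVD

rng = np.random.default_rng(0)
X = rng.random((2000, 42)).astype(np.float32)   # stand-in for raw flow features
y = rng.integers(0, 2, size=2000)               # 0 = benign, 1 = DDoS

# SVD keeps the top components (23 for UNSW-NB15 in the abstract).
X_red = TruncatedSVD(n_components=23, random_state=0).fit_transform(X).astype(np.float32)

class LSTMDetector(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)

    def forward(self, x):
        # Treat each flow record as a length-1 sequence of reduced features.
        h, _ = self.lstm(x.unsqueeze(1))
        return self.out(h[:, -1])

model = LSTMDetector(n_features=23)
logits = model(torch.from_numpy(X_red[:32]))  # train with CrossEntropyLoss against y
```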
Paul, Avishek, Islam, Md Rabiul.  2021.  An Artificial Neural Network Based Anomaly Detection Method in CAN Bus Messages in Vehicles. 2021 International Conference on Automation, Control and Mechatronics for Industry 4.0 (ACMI). :1–5.

The Controller Area Network (CAN) is the bus standard that works as a central system inside vehicles for communicating in-vehicle messages. Despite its many advantages, attackers may hack into a car system through the CAN bus, take control of it and cause serious damage, because the CAN bus lacks security services such as authentication and encryption. Therefore, an anomaly detection system must be integrated with the CAN bus in vehicles. In this paper, we propose an Artificial Neural Network based anomaly detection method to identify illicit messages on the CAN bus. We trained our model with two types of attacks so that it can efficiently identify them. When tested, the proposed algorithm showed high performance in detecting Denial of Service attacks (with 100% accuracy) and Fuzzy attacks (with 99.98% accuracy).

Sallam, Youssef F., Ahmed, Hossam El-din H., Saleeb, Adel, El-Bahnasawy, Nirmeen A., El-Samie, Fathi E. Abd.  2021.  Implementation of Network Attack Detection Using Convolutional Neural Network. 2021 International Conference on Electronic Engineering (ICEEM). :1–6.
The Internet obviously has a major impact on the global economy and human life every day. This boundless use pushes attackers to target information systems on the Internet. Web attacks affect the reliability of the Internet and its services. These attacks are classified as User-to-Root (U2R), Remote-to-Local (R2L), Denial-of-Service (DoS) and Probing (Probe). Consequently, ensuring web system security and protecting data are pivotal. The conventional layers of safeguards, like antivirus scanners, firewalls and proxies, which are applied to treat security weaknesses, are insufficient. So, Intrusion Detection Systems (IDSs) are utilized to monitor computer and information systems for security shortcomings. An IDS adds more effectiveness in securing networks against attacks. This paper presents an IDS model based on Deep Learning (DL) with a Convolutional Neural Network (CNN). The model has been evaluated on the NSL-KDD dataset. It has been trained on KDDTrain+ and tested twice, once using KDDTrain+ and the other time using KDDTest+. The achieved test accuracies are 99.7% and 98.43%, with false alarm rates of 0.002 and 0.02 for the two test scenarios, respectively.
Zheng, Shiji.  2021.  Network Intrusion Detection Model Based on Convolutional Neural Network. 2021 IEEE 5th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC). 5:634–637.
Network intrusion detection is an important research direction in network security. The diversification of network intrusion modes and the increasing amount of network data mean that traditional detection methods can no longer meet the requirements of the current network environment. The development of deep learning technology and its successful application in the field of artificial intelligence provide a new solution for network intrusion detection. In this paper, the convolutional neural network from deep learning is applied to network intrusion detection, and an intelligent detection model capable of active learning is established. Experiments on the KDD99 data set show that the model can effectively improve the accuracy and adaptability of intrusion detection, demonstrating its effectiveness and advancement.
2021-12-22
Poli, Jean-Philippe, Ouerdane, Wassila, Pierrard, Régis.  2021.  Generation of Textual Explanations in XAI: The Case of Semantic Annotation. 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–6.
Semantic image annotation is a field of paramount importance in which deep learning excels. However, some application domains, like security or medicine, may need an explanation of this annotation. Explainable Artificial Intelligence is an answer to this need. In this work, an explanation is a sentence in natural language that is dedicated to human users to provide them with clues about the process that leads to the decision: the assignment of labels to image parts. We focus on semantic image annotation with fuzzy logic, which has proven to be a useful framework that captures both image segmentation imprecision and the vagueness of human spatial knowledge and vocabulary. In this paper, we present an algorithm for generating textual explanations of the semantic annotation of image regions.
Nascita, Alfredo, Montieri, Antonio, Aceto, Giuseppe, Ciuonzo, Domenico, Persico, Valerio, Pescapè, Antonio.  2021.  Unveiling MIMETIC: Interpreting Deep Learning Traffic Classifiers via XAI Techniques. 2021 IEEE International Conference on Cyber Security and Resilience (CSR). :455–460.
The widespread use of powerful mobile devices has deeply affected the mix of traffic traversing both the Internet and enterprise networks (with bring-your-own-device policies). Traffic encryption has become extremely common, and the quick proliferation of mobile apps and their simple distribution and update have created a specifically challenging scenario for traffic classification and its uses, especially network-security related ones. The recent rise of Deep Learning (DL) has responded to this challenge, by providing a solution to the time-consuming and human-limited handcrafted feature design, and better classification performance. The counterpart of these advantages is the lack of interpretability of these black-box approaches, limiting or preventing their adoption in contexts where the reliability of results or interpretability of policies is necessary. To cope with these limitations, eXplainable Artificial Intelligence (XAI) techniques have seen recent intensive research. Along these lines, our work applies XAI-based techniques (namely, Deep SHAP) to interpret the behavior of a state-of-the-art multimodal DL traffic classifier. As opposed to common results seen in XAI, we aim at a global interpretation, rather than sample-based ones. The results quantify the importance of each modality (payload- or header-based), and of specific subsets of inputs (e.g., TLS SNI and TCP Window Size) in determining the classification outcome, down to the per-class (viz. application) level. The analysis is based on a publicly-released recent dataset focused on mobile app traffic.
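The sketch below shows how such a global Deep SHAP interpretation can be assembled: per-sample attributions from shap.DeepExplainer are averaged in absolute value to obtain per-class, per-input importances. The model, background set and input layout are generic stand-ins for the multimodal classifier and its payload/header inputs, and the handling of the explainer's return format is an assumption that may need adjusting across shap versions.

```python
# Sketch of a global Deep SHAP interpretation: per-sample attributions are
# averaged (mean absolute value) into per-class, per-input-field importances.
import numpy as np
import shap
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))  # 5 app classes
background = torch.rand(100, 20)    # reference samples for the explainer
samples = torch.rand(500, 20)       # traffic samples to explain

explainer = shap.DeepExplainer(model, background)
shap_values = explainer.shap_values(samples)

# Classic shap releases return a list with one attribution array per class;
# newer ones may return a single (samples, fields, classes) array.
vals = (np.array(shap_values) if isinstance(shap_values, list)
        else np.moveaxis(np.array(shap_values), -1, 0))

# Global interpretation: average |SHAP| over samples, per class and per input field.
global_importance = np.abs(vals).mean(axis=1)
print(global_importance.shape)  # (num_classes, num_input_fields)
```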
Kim, Jiha, Park, Hyunhee.  2021.  OA-GAN: Overfitting Avoidance Method of GAN Oversampling Based on xAI. 2021 Twelfth International Conference on Ubiquitous and Future Networks (ICUFN). :394–398.
The most representative approach in deep learning is data-driven learning. Such methods are data-dependent, and a lack of data leads to poor learning. GANs can create plausible images as a way to address the lack of data. In a GAN, the discriminator judges whether a generated image is fake or real, and the generator learns from these judgments. However, an overfitting problem arises when the discriminator becomes overly dependent on the training data. In this paper, we use xAI to explain the overfitting problem that occurs when the discriminator makes its fake/real decisions. Depending on the area of the image highlighted in the explanation, it is possible to limit the learning of the discriminator to avoid overfitting. By doing so, the generator can produce similar but more diverse images.
2021-12-21
Ayed, Mohamed Ali, Talhi, Chamseddine.  2021.  Federated Learning for Anomaly-Based Intrusion Detection. 2021 International Symposium on Networks, Computers and Communications (ISNCC). :1–8.
We are witnessing severe zero-day cyber attacks. Machine learning based anomaly detection is definitely the most efficient defence-in-depth approach. It consists of analyzing network traffic in order to distinguish normal behaviour from abnormal behaviour. This approach is usually implemented on a central server where all the network traffic is analyzed, which can raise privacy issues. In fact, with the increasing adoption of Cloud infrastructures, it is important to reduce as much as possible the outsourcing of such sensitive information from the individual network nodes. A better approach is to ask each node to analyze its own data and then exchange its learning findings (model) with a coordinator. In this paper, we investigate the application of federated learning for network-based intrusion detection. Our experiment was conducted on the CICIDS2017 dataset. We present federated learning of a deep learning algorithm (CNN) based on model averaging. It is a self-learning system for detecting anomalies caused by malicious adversaries without human intervention and can cope with new and unknown attacks without decreasing performance. These experiments demonstrate that this approach is effective in detecting intrusions.
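A minimal sketch of the model-averaging (FedAvg-style) loop described above: each node trains a copy of the shared CNN on its private traffic and the coordinator averages the resulting weights, so raw data never leaves the nodes. The toy model and local batches are placeholders; learning rates and epoch counts are illustrative assumptions.

```python
# Sketch of federated learning with model averaging: nodes train locally,
# the coordinator averages their weights instead of collecting raw traffic.
import copy
import torch
import torch.nn as nn

def make_model():
    return nn.Sequential(nn.Conv1d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 2))

def local_update(global_model, batches, epochs=1, lr=1e-3):
    """One node trains a copy of the global model on its private data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in batches:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def federated_average(states):
    """Coordinator step: element-wise mean of the nodes' weights (FedAvg)."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in states]).mean(dim=0)
    return avg

global_model = make_model()
node_data = [[(torch.rand(8, 1, 32), torch.randint(0, 2, (8,)))] for _ in range(3)]
states = [local_update(global_model, batches) for batches in node_data]
global_model.load_state_dict(federated_average(states))
```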
2021-12-20
Sahay, Rajeev, Brinton, Christopher G., Love, David J..  2021.  Frequency-based Automated Modulation Classification in the Presence of Adversaries. ICC 2021 - IEEE International Conference on Communications. :1–6.
Automatic modulation classification (AMC) aims to improve the efficiency of crowded radio spectrums by automatically predicting the modulation constellation of wireless RF signals. Recent work has demonstrated the ability of deep learning to achieve robust AMC performance using raw in-phase and quadrature (IQ) time samples. Yet, deep learning models are highly susceptible to adversarial interference, which cause intelligent prediction models to misclassify received samples with high confidence. Furthermore, adversarial interference is often transferable, allowing an adversary to attack multiple deep learning models with a single perturbation crafted for a particular classification network. In this work, we present a novel receiver architecture consisting of deep learning models capable of withstanding transferable adversarial interference. Specifically, we show that adversarial attacks crafted to fool models trained on time-domain features are not easily transferable to models trained using frequency-domain features. In this capacity, we demonstrate classification performance improvements greater than 30% on recurrent neural networks (RNNs) and greater than 50% on convolutional neural networks (CNNs). We further demonstrate our frequency feature-based classification models to achieve accuracies greater than 99% in the absence of attacks.