Biblio
Filters: Keyword is Training
Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning. 2021 IEEE Symposium on Security and Privacy (SP). :866–882.
2021. Differentially private (DP) machine learning allows us to train models on private data while limiting data leakage. DP formalizes this data leakage through a cryptographic game, where an adversary must predict whether a model was trained on a dataset D or a dataset D′ that differs in just one example. If observing the training algorithm does not meaningfully increase the adversary's odds of successfully guessing which dataset the model was trained on, then the algorithm is said to be differentially private. Hence, the purpose of privacy analysis is to upper bound the probability that any adversary could successfully guess which dataset the model was trained on. In our paper, we instantiate this hypothetical adversary in order to establish lower bounds on the probability that this distinguishing game can be won. We use this adversary to evaluate the importance of the adversary capabilities allowed in the privacy analysis of DP training algorithms. For DP-SGD, the most common method for training neural networks with differential privacy, our lower bounds are tight and match the theoretical upper bound. This implies that in order to prove better upper bounds, it will be necessary to make use of additional assumptions. Fortunately, we find that our attacks are significantly weaker when additional (realistic) restrictions are placed on the adversary's capabilities. Thus, in the practical setting common to many real-world deployments, there is a gap between our lower bounds and the upper bounds provided by the analysis: differential privacy is conservative, and adversaries may not be able to leak as much information as the theoretical bound suggests.
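The DP-SGD mechanism that these bounds target can be sketched as per-example gradient clipping followed by calibrated Gaussian noise. The sketch below is a minimal illustration, not the paper's code; all parameter names are ours.

```python
import math
import random

def dpsgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1,
               lr=0.1, rng=None):
    """One simplified DP-SGD update: clip each example's gradient to
    `clip_norm` in L2 norm, average, add Gaussian noise scaled by
    `noise_multiplier * clip_norm`, and return the update direction."""
    rng = rng or random.Random(0)
    clipped = []
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        clipped.append([x * scale for x in g])
    n = len(clipped)
    avg = [sum(col) / n for col in zip(*clipped)]
    sigma = noise_multiplier * clip_norm / n
    noisy = [a + rng.gauss(0.0, sigma) for a in avg]
    return [-lr * x for x in noisy]
```

The clipping bound is what ties one example's influence to the privacy analysis; the adversary instantiated in the paper probes exactly how much that influence can be exploited.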
AE-DCNN: Autoencoder Enhanced Deep Convolutional Neural Network For Malware Classification. 2021 International Conference on Intelligent Technologies (CONIT). :1–5.
2021. Malware classification is a problem of great significance in the domain of information security. This is because the classification of malware into respective families helps in determining their intent, activity, and level of threat. In this paper, we propose a novel deep learning approach to malware classification. The proposed method converts malware executables into image-based representations. These images are then classified into different malware families using an autoencoder enhanced deep convolutional neural network (AE-DCNN). In particular, we propose a novel training mechanism wherein a DCNN classifier is trained with the help of an encoder. We conjecture that using an encoder in the proposed way provides the classifier with the extra information that is perhaps lost during the forward propagation, thereby leading to better results. The proposed approach eliminates the use of feature engineering, reverse engineering, disassembly, and other domain-specific techniques earlier used for malware classification. On the standard Malimg dataset, we achieve a 10-fold cross-validation accuracy of 99.38% and F1-score of 99.38%. Further, due to the texture-based analysis of malware files, the proposed technique is resilient to several obfuscation techniques.
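The first stage of such pipelines, converting an executable's bytes into a grayscale image, can be sketched as follows (an illustrative reconstruction, not the AE-DCNN authors' code; the fixed row width is an assumption):

```python
def bytes_to_grayscale(data: bytes, width: int = 16) -> list:
    """Render a binary as a 2-D grid of 0-255 pixel values (one byte
    per pixel), padding the final row with zeros. Image classifiers
    then treat these grids like ordinary grayscale pictures."""
    pixels = list(data)
    pad = (-len(pixels)) % width          # zero-fill to a full last row
    pixels.extend([0] * pad)
    return [pixels[i:i + width] for i in range(0, len(pixels), width)]
```

Because the representation depends only on raw byte patterns (texture), it needs no disassembly and is relatively stable under common obfuscation, which is the property the abstract highlights.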
Analysis of Anomaly Detection Approaches Performed Through Deep Learning Methods in SCADA Systems. 2021 3rd International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA). :1—6.
2021. Supervisory control and data acquisition (SCADA) systems monitor and control industrial processes to keep them from failing. Today, the growing use of standard protocols, hardware, and software in SCADA systems that can connect to the Internet and institutional networks makes these systems a target for more cyber-attacks. Intrusion detection systems are used to reduce or minimize cyber-attack threats, and the use of deep learning-based intrusion detection systems is increasing in parallel with the growth in the amount of data in SCADA systems. The unsupervised feature learning present in deep learning approaches enables important features to be learned from large datasets; the features learned in this unsupervised way are then used to classify the data as normal or abnormal. Architectures such as the convolutional neural network (CNN), autoencoder (AE), deep belief network (DBN), and long short-term memory network (LSTM) are used to learn the features of SCADA data, and these architectures use the softmax function, extreme learning machines (ELM), deep belief networks, or multilayer perceptrons (MLP) in the classification step. In this study, anomaly-based intrusion detection systems from the literature that apply CNNs, autoencoders, DBNs, LSTMs, or various combinations of these methods to SCADA networks were analyzed, and the positive and negative aspects of these approaches were explained through their attack detection performance.
Analysis of Encrypted Image Data with Deep Learning Models. 2021 International Conference on Information Security and Cryptology (ISCTURKEY). :121—126.
2021. While various encryption algorithms ensure data security, it is essential to determine the accuracy, loss values, and overall performance obtained when deep learning is used to analyze encrypted data. In this research, the analysis steps performed by applying deep learning methods to encrypted CIFAR-10 image data are presented in practical terms. The data was estimated by training VGG16, VGG19, and ResNet50 deep learning models; during this process, the network's performance was measured, and the accuracy and loss values from these calculations are shown graphically.
API Security in Large Enterprises: Leveraging Machine Learning for Anomaly Detection. 2021 International Symposium on Networks, Computers and Communications (ISNCC). :1–6.
2021. Large enterprises offer thousands of micro-service applications to support their daily business activities through Application Programming Interfaces (APIs). These applications generate huge amounts of traffic via millions of API calls every day, which is difficult to analyze for detecting potential abnormal behaviour and application outages. This makes Machine Learning (ML) a natural choice for analyzing API traffic and obtaining intelligent predictions. This paper proposes an ML-based technique to detect and classify API traffic based on specific features such as bandwidth and number of requests per token. We employ a Support Vector Machine (SVM) with a linear kernel as a binary classifier to flag abnormal API traffic. Due to the scarcity of API datasets, we created a synthetic dataset inspired by a real-world API dataset, then used a Gaussian distribution outlier detection technique to create a labeled training dataset simulating real-world API logs, which we used to train the SVM classifier. Furthermore, to find a trade-off between accuracy and false positives, we aim to find the optimal value of the classifier's error term (C). The proposed anomaly detection method can be used in a plug-and-play manner and fits into the existing micro-service architecture with few adjustments, providing accurate results in a fast and reliable way. Our results demonstrate that the proposed method achieves an F1-score of 0.964 in detecting anomalies in API traffic with a 7.3% false positive rate.
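The Gaussian outlier labeling used to bootstrap training labels can be sketched as a per-feature z-score test. This is an assumption about the procedure, not the paper's exact implementation; the three-sigma threshold and feature names are ours.

```python
import math

def gaussian_outlier_labels(samples, threshold=3.0):
    """Label each (bandwidth, requests-per-token, ...) sample anomalous
    (1) when any feature lies more than `threshold` standard deviations
    from that feature's mean; otherwise normal (0)."""
    cols = list(zip(*samples))
    stats = []
    for col in cols:
        mu = sum(col) / len(col)
        var = sum((x - mu) ** 2 for x in col) / len(col)
        stats.append((mu, math.sqrt(var)))
    labels = []
    for s in samples:
        flag = any(sd > 0 and abs(x - mu) > threshold * sd
                   for x, (mu, sd) in zip(s, stats))
        labels.append(1 if flag else 0)
    return labels
```

Labels produced this way would then feed a supervised linear-kernel SVM, with C tuned to trade accuracy against false positives.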
Assessing Trustworthiness of IoT Applications Using Logic Circuits. 2021 IEEE East-West Design & Test Symposium (EWDTS). :1—4.
2021. The paper describes a methodology for assessing non-functional requirements, such as trust characteristics, for applications running on computationally constrained devices in the Internet of Things. The methodology is demonstrated through the example of a microcontroller-based temperature monitoring system. The concepts of trust and trustworthiness for software and devices in the Internet of Things are complex characteristics describing the correct and secure operation of such systems and include aspects of operational and information security, reliability, resilience, and privacy. Machine learning models, which have increasingly been used for such tasks in recent years, are resource-consuming software implementations. The paper proposes using a logic circuit model to implement the above algorithms as an additional module that checks the trustworthiness of applications running on computationally constrained devices. Such a module could be implemented in hardware, for example as an FPGA, in order to achieve greater effectiveness.
Backdoor Attack Against Speaker Verification. ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :2560–2564.
2021. Speaker verification has been widely and successfully adopted in many mission-critical areas for user identification. Training a speaker verification system requires a large amount of data, so users usually need to adopt third-party data (e.g., data from the Internet or a third-party data company). This raises the question of whether adopting untrusted third-party data can pose a security threat. In this paper, we demonstrate that it is possible to inject a hidden backdoor into speaker verification models by poisoning the training data. Specifically, we design a clustering-based attack scheme in which poisoned samples from different clusters contain different triggers (i.e., pre-defined utterances), based on our understanding of verification tasks. The infected models behave normally on benign samples, while attacker-specified unenrolled triggers successfully pass verification even if the attacker has no information about the enrolled speaker. We also demonstrate that existing backdoor attacks cannot be directly adopted to attack speaker verification. Our approach not only provides a new perspective for designing novel attacks, but also serves as a strong baseline for improving the robustness of verification methods. The code for reproducing the main results is available at https://github.com/zhaitongqing233/Backdoor-attack-against-speaker-verification.
Benchmarking Robustness of Deep Learning Classifiers Using Two-Factor Perturbation. 2021 IEEE International Conference on Big Data (Big Data). :5085–5094.
2021. Deep learning (DL) classifiers are often unstable in that they may change significantly when retested on perturbed or low-quality images. This paper adds to the fundamental body of work on the robustness of DL classifiers. We introduce a new two-dimensional benchmarking matrix to evaluate the robustness of DL classifiers, and we also introduce a four-quadrant statistical visualization tool, covering minimum accuracy, maximum accuracy, mean accuracy, and coefficient of variation, for benchmarking that robustness. To measure robust DL classifiers, we create 69 comprehensive benchmarking image sets, including a clean set, sets with single-factor perturbations, and sets with two-factor perturbation conditions. After collecting the experimental results, we first report that using two-factor perturbed images improves both the robustness and the accuracy of DL classifiers. The two-factor perturbations include (1) two digital perturbations (salt & pepper noise and Gaussian noise) applied in both sequences, and (2) one digital perturbation (salt & pepper noise) and a geometric perturbation (rotation) applied in both sequences. All source code, related image sets, and results are shared on GitHub at https://github.com/caperock/robustai to support future academic research and industry projects.
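The second two-factor condition (a digital perturbation composed with a geometric one) can be sketched as follows; the noise rate, rotation angle, and composition order here are illustrative choices, not the paper's exact settings.

```python
import random

def salt_and_pepper(img, rate=0.1, rng=None):
    """Flip a fraction `rate` of pixels to pure black (0) or white (255)."""
    rng = rng or random.Random(0)
    return [[(0 if rng.random() < 0.5 else 255) if rng.random() < rate else p
             for p in row] for row in img]

def rotate90(img):
    """Geometric perturbation: rotate the image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def two_factor(img, rate=0.1, rng=None):
    """One two-factor condition: digital noise, then a geometric change."""
    return rotate90(salt_and_pepper(img, rate, rng))
```

Applying the two factors in both orders, as the benchmark does, simply means also evaluating `salt_and_pepper(rotate90(img), rate)`.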
Bridging the Gap: Adapting a Security Education Platform to a New Audience. 2021 IEEE Global Engineering Education Conference (EDUCON). :153—159.
2021. The current supply of highly specialized cyber security professionals cannot meet the demand of societies seeking digitization. To close the skills gap, there is a need to introduce students in higher education to cyber security and to combine theoretical knowledge with practical skills. This paper presents how the cyber security training platform Haaukins, initially developed to increase interest in and knowledge of cyber security among high school students, was further developed to support the need for training in higher education. Based on the differences between the existing and new target audiences, a set of design principles was derived which shaped the technical adjustments required to provide a suitable platform, mainly related to dynamic tooling, centralized access to exercises, and scalability of the platform to support courses running over longer periods of time. The implementation of these adjustments has led to a series of teaching sessions in various institutions of higher education, demonstrating the viability of Haaukins for the new target audience.
A CGAN-based DDoS Attack Detection Method in SDN. 2021 International Wireless Communications and Mobile Computing (IWCMC). :1030—1034.
2021. The distributed denial of service (DDoS) attack is a common form of network attack, characterized by wide distribution, low cost, and difficult defense. Traditional machine learning (ML) algorithms have shortcomings such as excessive system overhead and low accuracy when detecting DDoS. In this paper, a method based on a conditional generative adversarial network (CGAN) is proposed to detect DDoS attacks. During offline training, five features are extracted to adapt the input to the neural network; during online recognition, the CGAN model is adopted to recognize DDoS attack packets. The experimental results demonstrate that the proposed method obtains better performance than a random forest-based method.
Classification Coding and Image Recognition Based on Pulse Neural Network. 2021 IEEE International Conference on Artificial Intelligence and Industrial Design (AIID). :260–265.
2021. Based on the spiking neural network, the third generation of neural networks, this paper optimizes and improves a classification and coding method and proposes an image recognition method. First, the input image is converted into a spike sequence; the spike sequence is then encoded in groups and fed to the neurons of the spiking neural network. After repeated learning and training, a quantization standard code is obtained. In this process, a spike sequence transformation matrix and a dynamic weight matrix are obtained, and unclassified data are passed through the same matrices for image recognition and classification. Simulation results show that these methods yield correct coding and preliminary recognition and classification, and that the spiking neural network is applicable to this task.
Cooperative Machine Learning Techniques for Cloud Intrusion Detection. 2021 International Wireless Communications and Mobile Computing (IWCMC). :837–842.
2021. Cloud computing has attracted a lot of attention in the past few years, yet even with its wide acceptance, cloud security remains one of the most essential concerns of cloud computing. Many systems have been proposed to protect the cloud from attacks using attack signatures. Most of them may seem effective and efficient; however, there are many drawbacks, such as attack detection performance and system maintenance. Recently, learning-based methods have been proposed for cloud anomaly detection, especially with the advent of machine learning techniques. However, most researchers do not consider attack classification, which is an important parameter for proposing an appropriate countermeasure for each attack type. In this paper, we propose a new firewall model called the Secure Packet Classifier (SPC) for cloud anomaly detection and classification. The proposed model is constructed using collaborative filtering over two machine learning algorithms to gain the advantages of both learning schemes, a strategy that increases the learning performance and the system's accuracy. A publicly available dataset is used for training and testing the performance of the proposed SPC. Our results show that the SPC model increases detection accuracy by 20% compared to the existing machine learning algorithms while keeping a high attack detection rate.
Corner Case Data Description and Detection. 2021 IEEE/ACM 1st Workshop on AI Engineering - Software Engineering for AI (WAIN). :19–26.
2021. As major factors affecting the safety of deep learning models, corner cases and their detection are crucial in AI quality assurance for constructing safety- and security-critical systems. Generic corner case research involves two interesting topics. One is to enhance DL models' robustness to corner case data via adjustment of parameters or structure. The other is to generate new corner cases for model retraining and improvement. However, the complex architecture and the huge number of parameters make robust adjustment of DL models difficult, and it is not possible to generate all real-world corner cases for DL training. Therefore, this paper proposes a simple and novel approach to corner case data detection via a specific metric. This metric is developed from surprise adequacy (SA), which has advantages in capturing data behaviors. Furthermore, targeting the characteristics of corner case data, three modifications of distance-based SA are developed for classification applications in this paper. Consequently, through experimental analysis on MNIST data and industrial data, the feasibility and usefulness of the proposed method for corner case data detection are verified.
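The core of distance-based surprise adequacy can be sketched as a nearest-neighbour distance in activation space. This is a deliberately simplified version (the published DSA also normalizes by the distance to the nearest other-class activation, and the paper adds its own three modifications, which are not reproduced here):

```python
import math

def distance_sa(activation, train_activations):
    """Simplified distance-based surprise adequacy: the Euclidean
    distance from a new input's activation vector to its nearest
    neighbour among the training activations. Large values suggest
    the input is 'surprising', i.e. a corner-case candidate."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(dist(activation, t) for t in train_activations)
```

A detector would flag inputs whose score exceeds a threshold calibrated on held-out training data.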
Co-training For Image-Based Malware Classification. 2021 IEEE Asia-Pacific Conference on Image Processing, Electronics and Computers (IPEC). :568–572.
2021. A malware detection model based on semi-supervised learning is proposed in this paper. Our model consists mainly of three parts: malware visualization, feature extraction, and classification. First, the malware visualization step converts malware into grayscale images; then the features of the images are extracted to reflect the coding patterns of the malware; finally, a collaborative learning model is applied to malware detection using both labeled and unlabeled software samples. The proposed model was evaluated on two commonly used benchmark datasets. The results demonstrate that, compared with traditional methods, our model not only reduces the cost of sample labeling but also improves detection accuracy by incorporating unlabeled samples into the collaborative learning process, thereby achieving higher classification performance.
Covert Wireless Communications Under Quasi-Static Fading With Channel Uncertainty. IEEE Transactions on Information Forensics and Security. 16:1104–1116.
2021. Covert communications enable a transmitter to send information reliably in the presence of an adversary, who looks to detect whether the transmission took place or not. We consider covert communications over quasi-static block fading channels, where users suffer from channel uncertainty. We investigate the adversary Willie's optimal detection performance in two extreme cases, i.e., the case of perfect channel state information (CSI) and the case of channel distribution information (CDI) only. It is shown that in the large detection error regime, Willie's detection performances of these two cases are essentially indistinguishable, which implies that the quality of CSI does not help Willie in improving his detection performance. This result enables us to study the covert transmission design without the need to factor in the exact amount of channel uncertainty at Willie. We then obtain the optimal and suboptimal closed-form solution to the covert transmission design. Our result reveals fundamental difference in the design between the case of quasi-static fading channel and the previously studied case of non-fading AWGN channel.
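Willie's detection problem is a binary hypothesis test, and his performance is typically summarized as below. These expressions are standard in the covert communications literature (the notation is ours, not necessarily the paper's): with $P_0$ and $P_1$ the observation distributions under "no transmission" and "transmission",

```latex
\xi \;=\; P_{\mathrm{FA}} + P_{\mathrm{MD}} \;\ge\; 1 - V_T(P_0, P_1),
\qquad
V_T(P_0, P_1) \;\le\; \sqrt{\tfrac{1}{2}\, D(P_0 \,\|\, P_1)},
```

where $\xi$ is the total detection error of the optimal test, $V_T$ is the total variation distance, and the second inequality is Pinsker's. The "large detection error regime" in the abstract is the regime where $\xi$ stays close to 1, i.e. Willie can do little better than random guessing.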
Cross-Site Scripting (XSS) and SQL Injection Attacks Multi-classification Using Bidirectional LSTM Recurrent Neural Network. 2021 IEEE International Conference on Progress in Informatics and Computing (PIC). :358–363.
2021. E-commerce, ticket booking, banking, and other web-based applications that deal with sensitive information, such as passwords, payment information, and financial data, are widespread. Web developers may have differing levels of understanding of how to secure an online application. Two of the vulnerabilities identified by the Open Web Application Security Project (OWASP) in its 2017 Top Ten list are SQL injection and cross-site scripting (XSS); through these two flaws, an attacker can launch harmful web-based actions. Many published articles have concentrated on binary classification of these attacks. This article develops a new approach for detecting SQL injection and XSS attacks using deep learning. The SQL injection and XSS payload datasets are combined into a single dataset, and a word-embedding technique is utilized to convert the words' text into vectors. Our model uses a BiLSTM for automatic feature extraction, training, and testing on the payload dataset; the BiLSTM classifies the payloads into three classes: XSS, SQL injection, and normal. The results show strong performance in classifying payloads into the three classes (XSS attacks, injection attacks, and non-malicious payloads), with the BiLSTM reaching an accuracy of 99.26%.
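Before any embedding layer or BiLSTM, payloads must be tokenized and mapped to padded integer sequences. A minimal sketch of that preprocessing step follows (the whitespace tokenizer and reserved ids are common conventions, assumed rather than taken from the paper):

```python
def build_vocab(payloads, min_index=2):
    """Map each token to an integer id; 0 is reserved for padding and
    1 for out-of-vocabulary tokens, as is conventional upstream of an
    embedding layer."""
    vocab = {}
    for p in payloads:
        for tok in p.split():
            vocab.setdefault(tok, len(vocab) + min_index)
    return vocab

def encode(payload, vocab, maxlen=8):
    """Convert a payload string into a fixed-length id sequence."""
    ids = [vocab.get(tok, 1) for tok in payload.split()]
    return (ids + [0] * maxlen)[:maxlen]
```

The resulting fixed-length sequences are what a recurrent classifier consumes, one embedded vector per token, in both directions for a BiLSTM.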
Cyber Warfare Threat Categorization on CPS by Dark Web Terrorist. 2021 IEEE 4th International Conference on Computing, Power and Communication Technologies (GUCON). :1—6.
2021. The Industrial Internet of Things (IIoT), also referred to as Cyber Physical Systems (CPS), comprises critical elements expected to play a key role in Industry 4.0 and has always been vulnerable to cyber-attacks and vulnerabilities. Terrorists use cyber vulnerabilities as weapons for mass destruction. The dark web's strong anonymity and hard-to-track systems offer a safe haven for criminal activity, and a wide variety of illicit material is posted regularly on the dark web (DW). Traditional DW categorization uses large-scale web pages for supervised training; however, new study is hampered by the impossibility of gathering sufficient illicit DW material and by the time spent manually tagging web pages. In this article, we suggest a system for accurately classifying criminal activity on the DW. Rather than depending on a vast DW training corpus, we used authoritative regulatory descriptions of various types of illicit activity to train Machine Learning (ML) classifiers and obtained appreciable categorization results. Espionage, sabotage, attacks on the electrical power grid, propaganda, and economic disruption are the cyber warfare motivations considered; we chose appropriate data from open source links for supervised learning and ran a categorization experiment on illicit material obtained from the actual DW. The results show that, in the experimental setting, using TF-IDF feature extraction and an AdaBoost classifier, we were able to achieve an accuracy of 0.942. Our method enables researchers and system authorities to verify whether their DW corpus includes such illicit activity, depending on the applicable rules of the illicit categories they are interested in, allowing them to identify and track possible illicit websites in real time. Because a broad training set and expert-supplied seed keywords are not required, this categorization approach offers another option for defining illicit activities on the DW.
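The TF-IDF feature extraction named in the abstract can be sketched from scratch; the whitespace tokenizer and the unsmoothed idf variant below are illustrative choices, not necessarily those of the study.

```python
import math

def tf_idf(docs):
    """Per-document TF-IDF weights: relative term frequency times
    inverse document frequency log(N / df)."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    df = {}
    for toks in tokenized:
        for t in set(toks):
            df[t] = df.get(t, 0) + 1
    weights = []
    for toks in tokenized:
        tf = {}
        for t in toks:
            tf[t] = tf.get(t, 0) + 1
        weights.append({t: (c / len(toks)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights
```

Terms that occur in every document receive weight zero, while category-specific vocabulary is up-weighted; the resulting vectors are then fed to a classifier such as AdaBoost.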
The Cyber-MAR Project: First Results and Perspectives on the Use of Hybrid Cyber Ranges for Port Cyber Risk Assessment. 2021 IEEE International Conference on Cyber Security and Resilience (CSR). :409—414.
2021. With over 80% of goods transportation in volume carried by sea, ports are key infrastructures within the logistics value chain. To address the challenges of the globalized and competitive economy, ports are digitizing at a fast pace, evolving into smart ports. Consequently, the cyber-resilience of ports is essential to prevent possible disruptions to the economic supply chain. Over the last few years, there has been a significant increase in the number of disclosed cyber-attacks on ports. In this paper, we present the capabilities of a high-end hybrid cyber range for port cyber risks awareness and training. By describing a specific port use-case and the first results achieved, we draw perspectives for the use of cyber ranges for the training of port actors in cyber crisis management.
Dark web traffic detection method based on deep learning. 2021 IEEE 10th Data Driven Control and Learning Systems Conference (DDCLS). :842—847.
2021. Network traffic detection is closely related to network security and is currently a hot research topic. With the development of encryption technology, traffic detection has become more and more difficult, and many crimes occur on the dark web, so how to detect dark web traffic is the subject of this study. In this paper, we propose a dark web traffic (Tor traffic) detection scheme based on deep learning and conduct experiments on public datasets. Analysis of the experimental results shows that our detection precision rate reached 95.47%.
DDoS Attack Detection System using Apache Spark. 2021 International Conference on Computer Communication and Informatics (ICCCI). :1—5.
2021. Distributed denial of service (DDoS) attacks are among the most widely used cyber-attacks; thus, the design of DDoS detection mechanisms has attracted the attention of researchers. Designing these mechanisms involves building statistical and machine learning models, and most existing work focuses on improving model accuracy. However, due to the large volume of network traffic, the scalability and performance of these techniques is an important research issue. In this work, we use the Apache Spark framework for the detection of DDoS attacks, with the NSL-KDD Cup benchmark dataset for experimental analysis. The results reveal that random forest performs better than decision trees and that distributed processing improves performance in terms of pre-processing and training time.
DDoS Attack Detection using Artificial Neural Network. 2021 International Conference on Industrial Electronics Research and Applications (ICIERA). :1—5.
2021. Distributed denial of service (DDoS) attacks are among the most rapidly evolving threats on the current Internet, and yet there is no effective mechanism to curb them. In the field of DDoS attacks, as in all other areas of cybersecurity, attackers are increasingly using sophisticated methods. The work in this paper focuses on using an artificial neural network to detect various types of DDoS attacks (UDP-Flood, Smurf, HTTP-Flood, and SiDDoS), concentrating mainly on network- and transport-layer DDoS attacks. Additionally, the time and space complexity is calculated to further improve the efficiency of the implemented model and to overcome the limitations found in the research gap. The results obtained from our analysis of the dataset show that the proposed methods can better detect DDoS attacks.
Deep Poisoning: Towards Robust Image Data Sharing against Visual Disclosure. 2021 IEEE Winter Conference on Applications of Computer Vision (WACV). :686–696.
2021. Because each has only limited training data, different entities addressing the same vision task on certain sensitive images may be unable to train a robust deep network. This paper introduces a new vision task in which various entities share task-specific image data to enlarge each other's training data volume without visually disclosing sensitive contents (e.g. illegal images). We then present a new structure-based training regime that enables different entities to learn task-specific, reconstruction-proof image representations for image data sharing. Specifically, each entity learns a private Deep Poisoning Module (DPM) and inserts it into a pre-trained deep network designed to perform the specific vision task. The DPM deliberately poisons convolutional image features to prevent image reconstruction, while ensuring that the altered image data is functionally equivalent to the non-poisoned data for the specific vision task. Given this equivalence, the poisoned features shared by one entity can be used by another entity for further model refinement. Experimental results on image classification prove the efficacy of the proposed method.
Deep Reinforcement Learning for Mitigating Cyber-Physical DER Voltage Unbalance Attacks. 2021 American Control Conference (ACC). :2861–2867.
2021. The deployment of DER with smart-inverter functionality is increasing the controllable assets on power distribution networks and, consequently, the cyber-physical attack surface. Within this work, we consider the use of reinforcement learning as an online controller that adjusts DER Volt/Var and Volt/Watt control logic to mitigate network voltage unbalance. We specifically focus on the case where a network-aware cyber-physical attack has compromised a subset of single-phase DER, causing a large voltage unbalance. We show how deep reinforcement learning successfully learns a policy minimizing the unbalance, both during normal operation and during a cyber-physical attack. In mitigating the attack, the learned stochastic policy operates alongside legacy equipment on the network, i.e. tap-changing transformers, adjusting optimally predefined DER control-logic.
Defences Against web Application Attacks and Detecting Phishing Links Using Machine Learning. 2020 International Conference on Computer, Control, Electrical, and Electronics Engineering (ICCCEEE). :1–6.
2021. In recent years an estimated 30,000 web applications have been hacked every day, and in most cases web developers or website owners do not even have enough knowledge about what is happening on their sites. Web hackers can use many attacks to gain entry to or compromise legitimate web applications, and they can also deceive people by using phishing sites to collect sensitive and private information. In response, proper measures must be taken to understand the risks and to be aware of the vulnerabilities that may affect the website and hence the normal business flow. In the scope of this study, mitigations against the most common web application attacks are set out, and the web administrator is provided with ways to detect phishing links, a social engineering attack. The study also demonstrates the generation of web application logs, which simplifies the process of analyzing the actions of abnormal users to show when behavior is out of bounds, out of scope, or against the rules. Mitigation is accomplished through secure coding techniques, and phishing link detection is performed by various machine learning algorithms and deep learning techniques. The developed application has been tested and evaluated against various attack scenarios; the outcomes showed that the website successfully mitigated these dangerous web application attacks, and for phishing link detection a comparison was made between different algorithms to find the best one, with the best model giving 98% accuracy.
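Phishing-link classifiers of this kind are usually driven by lexical URL features. The feature set below is a common illustrative one, not the study's exact list:

```python
def url_features(url):
    """Extract simple lexical features often fed to phishing-link
    classifiers: URL length, dot count, presence of '@', an IP-style
    host, and use of plain HTTP."""
    host = url.split("//", 1)[-1].split("/", 1)[0]
    return {
        "length": len(url),
        "num_dots": url.count("."),
        "has_at": "@" in url,
        "ip_host": host.replace(".", "").isdigit(),
        "plain_http": url.startswith("http://"),
    }
```

Feature dictionaries like these, computed over a labeled corpus of benign and phishing URLs, are what the compared ML and deep learning models would be trained on.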
Detecting Adversarial DDoS Attacks in Software- Defined Networking Using Deep Learning Techniques and Adversarial Training. 2021 IEEE International Conference on Cyber Security and Resilience (CSR). :448—454.
2021. In recent years, Deep Learning (DL) has been utilized in cyber-attack detection mechanisms, as it offers highly accurate detection and is able to overcome the limitations of standard machine learning techniques. When applied in a Software-Defined Network (SDN) environment, a DL-based detection mechanism shows satisfying detection performance. In the case of adversarial attacks, however, the detection performance deteriorates. Therefore, in this paper, we first outline a highly accurate flooding DDoS attack detection framework based on DL for SDN environments. Second, we investigate the performance degradation of our detection framework when tested with two adversarial traffic datasets. Finally, we evaluate three adversarial training procedures for improving the detection performance of our framework against adversarial attacks. It is shown that applying one of the adversarial training procedures can avoid detection performance degradation and thus might be used in a real-time detection system based on continual learning.
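The adversarial examples that such training defends against are classically generated by a fast-gradient-sign step. The sketch below uses a linear scorer in place of the DL detector purely as a simplification; the loss and step size are assumptions, not the paper's configuration.

```python
def fgsm_perturb(x, w, label, eps=0.1):
    """Fast-gradient-sign perturbation of a feature vector against a
    linear scorer f(x) = w.x (label is +1 or -1): move each feature by
    `eps` in the sign of the loss gradient, which for a hinge-style
    loss is -label * w. Adversarial training mixes such perturbed
    samples back into the training set."""
    def sign(v):
        return 1 if v > 0 else -1 if v < 0 else 0
    return [xi + eps * sign(-label * wi) for xi, wi in zip(x, w)]
```

Each FGSM step lowers `label * f(x)`, pushing the sample toward the decision boundary; retraining on these perturbed flows is what restores the detector's performance under adversarial traffic.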