Biblio
Filters: Keyword is machine learning
Adversarial Audio Detection Method Based on Transformer. 2022 International Conference on Machine Learning and Intelligent Systems Engineering (MLISE). :77–82.
2022. Speech recognition technology has been applied to many aspects of our daily life, but it faces many security issues. One of the major threats is adversarial audio examples, which may tamper with the recognition results of an automatic speech recognition (ASR) system. In this paper, we propose an adversarial detection framework to detect adversarial audio examples. The method is based on the transformer self-attention mechanism. Spectrogram features are extracted from the audio and divided into patches. Position information is embedded, and the patches are then fed into a transformer encoder. Experimental results show that the method achieves good performance, with a detection accuracy above 96.5% under white-box attacks, black-box attacks, and noisy conditions. Even when detecting adversarial examples generated by unknown attacks, it achieves satisfactory results.
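The front end this abstract describes (spectrogram split into patches, linearly projected, summed with position embeddings, then handed to a transformer encoder) can be sketched as follows. This is a hedged numpy illustration, not the authors' code: all shapes, the patch size, and the random projection weights are assumptions, and the encoder itself is omitted.

```python
import numpy as np

def patchify(spectrogram, patch):
    """Split a (freq, time) spectrogram into flattened non-overlapping square patches."""
    f, t = spectrogram.shape
    f_trim, t_trim = f - f % patch, t - t % patch   # drop ragged edges
    s = spectrogram[:f_trim, :t_trim]
    return (s.reshape(f_trim // patch, patch, t_trim // patch, patch)
             .transpose(0, 2, 1, 3)
             .reshape(-1, patch * patch))

rng = np.random.default_rng(0)
spec = rng.standard_normal((128, 64))          # toy log-mel spectrogram
patches = patchify(spec, patch=16)             # (32, 256): 8x4 grid of 16x16 patches

d_model = 64
proj = rng.standard_normal((patches.shape[1], d_model)) * 0.02  # linear projection
pos = rng.standard_normal((patches.shape[0], d_model)) * 0.02   # position embeddings
tokens = patches @ proj + pos                  # sequence fed to the transformer encoder
print(tokens.shape)
```

A real detector would replace the random weights with learned parameters and append a classification head on the encoder output.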
Analysis of classification based predicted disease using machine learning and medical things model. 2022 Second International Conference on Advances in Electrical, Computing, Communication and Sustainable Technologies (ICAECT). :1–6.
2022. Health diseases have become seriously harmful to human life due to poor diet and disturbed working environments in organizations. Precise prediction and diagnosis of disease has become a serious and challenging task for primary prevention, recognition, and treatment. Based on the above challenges, we propose Medical Things (MT) and machine learning models to solve healthcare problems with appropriate services in disease supervision, forecasting, and diagnosis. We developed a prediction framework with machine learning approaches to obtain different categories of classification for the predicted disease. The framework is designed with a fuzzy model and a decision tree to lessen the data complexity. We considered heart disease for the experiments, and the experimental evaluation determined the prediction for the categories of classification. The number of decision trees (M) with samples (MS), leaf nodes (ML), and learning rate (I) is determined as MS=20
Anomaly Detection based on Robust Spatial-temporal Modeling for Industrial Control Systems. 2022 IEEE 19th International Conference on Mobile Ad Hoc and Smart Systems (MASS). :355–363.
2022. Industrial Control Systems (ICS) are increasingly facing the threat of False Data Injection (FDI) attacks. As an emerging intrusion detection scheme for ICS, process-based Intrusion Detection Systems (IDS) can effectively detect the anomalies caused by FDI attacks. Specifically, such an IDS establishes an anomaly detection model that describes the normal pattern of industrial processes, then performs real-time anomaly detection on industrial process data. However, this method suffers from low detection accuracy due to the complexity and instability of industrial processes. That is, the process data inherently contains sophisticated nonlinear spatial-temporal correlations which are hard to describe explicitly in an anomaly detection model. In addition, the noise and disturbance in process data prevent the IDS from distinguishing real anomaly events. In this paper, we propose an Anomaly Detection approach based on Robust Spatial-temporal Modeling (AD-RoSM). Concretely, to explicitly describe the spatial-temporal correlations within the process data, a neural state estimation model is proposed that utilizes a 1D CNN for temporal modeling and a multi-head self-attention mechanism for spatial modeling. To perform robust anomaly detection in the presence of noise and disturbance, a composite anomaly discrimination model is designed so that the outputs of the state estimation model can be analyzed with a combination of a threshold strategy and an entropy-based strategy. We conducted extensive experiments on two benchmark ICS security datasets to demonstrate the effectiveness of our approach.
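The spatial-modeling ingredient named in this abstract, multi-head self-attention across sensor channels, can be sketched in a few lines of numpy. This is a hedged illustration of the mechanism only: the sensor count, model width, head count, and random weights are invented, and the paper's 1D CNN and discrimination stages are not shown.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, wq, wk, wv, n_heads):
    """x: (n_sensors, d_model) -> (n_sensors, d_model); each head attends sensor-to-sensor."""
    n, d = x.shape
    dh = d // n_heads
    q, k, v = x @ wq, x @ wk, x @ wv
    out = np.empty_like(x)
    for h in range(n_heads):
        sl = slice(h * dh, (h + 1) * dh)
        scores = q[:, sl] @ k[:, sl].T / np.sqrt(dh)   # (n, n) attention weights
        out[:, sl] = softmax(scores) @ v[:, sl]
    return out

rng = np.random.default_rng(1)
n_sensors, d_model = 10, 32
x = rng.standard_normal((n_sensors, d_model))          # per-sensor feature vectors
wq, wk, wv = (rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(3))
y = multi_head_self_attention(x, wq, wk, wv, n_heads=4)
print(y.shape)
```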
Application of Random Forest Classifier for Prevention and Detection of Distributed Denial of Service Attacks. 2022 OITS International Conference on Information Technology (OCIT). :380–384.
2022. Spotting Distributed Denial of Service (DDoS) attacks is a classification problem in machine learning. A Denial of Service (DoS) attack is essentially a deliberate attack launched from a single source with the intent of rendering the target's application unavailable. To accomplish this, attackers typically aim to consume all available network bandwidth, which prevents authorized users from accessing system resources. DDoS attacks, in contrast to DoS attacks, involve several sources being used by the attacker to launch an attack. DDoS attacks are most frequently observed at the network, transport, presentation, and application layers of the 7-layer OSI architecture. With the help of a well-known standard dataset and multiple regression analysis, we have created a machine learning model in this work that can predict DDoS and bot attacks based on traffic.
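The regression-based prediction idea in this abstract can be sketched as a logistic regression fitted by gradient descent. This is a hedged stand-in, not the paper's model or dataset: the flow features (packet rate, byte rate, source-address entropy) and their distributions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# synthetic flows: benign vs. DDoS differ in rate and address entropy (assumed features)
benign = np.column_stack([rng.normal(50, 10, n),  rng.normal(40, 8, n),   rng.normal(3.0, 0.5, n)])
ddos   = np.column_stack([rng.normal(500, 50, n), rng.normal(400, 50, n), rng.normal(0.8, 0.3, n)])
X = np.vstack([benign, ddos])
X = (X - X.mean(0)) / X.std(0)                      # standardize features
y = np.concatenate([np.zeros(n), np.ones(n)])       # 0 = benign, 1 = DDoS

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):                                 # batch gradient descent
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

On real traffic the classes overlap far more than in this toy setup, so held-out evaluation and regularization would be needed.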
An Approach for P2P Based Botnet Detection Using Machine Learning. 2022 Third International Conference on Intelligent Computing Instrumentation and Control Technologies (ICICICT). :627–631.
2022. The internet has developed and transformed the world dramatically in recent years, which has resulted in numerous cyberattacks. Cybersecurity is one of society's most serious challenges, costing millions of dollars every year. The research presented here looks into this area, focusing on malware that can establish botnets and, in particular, on detecting connections made by infected workstations communicating with the attacker's machine. In recent years, the frequency of network security incidents has risen dramatically. Botnets have been widely used by attackers to carry out a variety of malicious activities, such as compromising machines to monitor their activities by installing a keylogger or sniffing traffic, launching Distributed Denial of Service (DDoS) attacks, stealing the identity of the machine or credentials, and even exfiltrating data from the user's computer. Botnet detection is still a work in progress because no single approach exists that can detect a botnet's whole ecosystem. A detailed analysis of botnets, the results of numerous detection methods for botnet attacks, and existing work on botnet identification in the field of machine learning are discussed here. This paper focuses on a comparative analysis of various machine learning classifiers, based on the design of botnet detection techniques able to detect P2P botnets.
ARGUS – An Adaptive Smart Home Security Solution. 2022 4th International Conference on Advancements in Computing (ICAC). :459–464.
2022. Smart security solutions are in high demand with the ever-increasing vulnerabilities within the IT domain. Adjusting to a Work-From-Home (WFH) culture while maintaining the required core security principles has become mandatory. Therefore, implementing and maintaining a secure Smart Home System has become even more challenging. ARGUS provides overall network security coverage for both incoming and outgoing traffic, a firewall, an adaptive bandwidth management system, and a sophisticated CCTV surveillance capability. ARGUS is implemented in an existing router, incorporating cloud and Machine Learning (ML) technology to ensure seamless connectivity across multiple devices, including IoT devices, at a low migration cost for the customer. The aggregation of the above features makes ARGUS an ideal solution for existing Smart Home System service providers and users where hardware and infrastructure are already allocated. ARGUS was tested on a small-scale smart home environment with a Raspberry Pi 4 Model B controller. Its intrusion detection system identified an intrusion with 96% accuracy, while the physical surveillance system identifies the user with 81% accuracy.
Authorship Verification via Linear Correlation Methods of n-gram and Syntax Metrics. 2022 Intermountain Engineering, Technology and Computing (IETC). :1–6.
2022. This research evaluates the accuracy of two methods of authorship prediction, syntactical analysis and n-gram analysis, and explores their potential usage. The proposed algorithm measures n-grams and counts adjectives, adverbs, verbs, nouns, punctuation, and sentence length from the training data, and normalizes each metric. The proposed algorithm compares the metrics of training samples to testing samples and predicts authorship based on the correlation they share for each metric. The strength of correlation between the testing and training data carries significant weight in the decision-making process. For example, if analysis of one metric approximates 100% positive correlation, the weight in the decision is assigned the maximum value for that metric. Conversely, a 100% negative correlation receives the minimum value. This new method of authorship validation holds promise for future innovation in fraud protection, the study of historical documents, and maintaining integrity within academia.
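The correlation-weighted decision this abstract describes can be sketched as follows: each author is reduced to a vector of normalized metrics, and a test sample is attributed to the training author whose metric vector correlates most strongly with it. All metric values and author names below are invented for illustration; the paper's actual metric set and weighting scheme may differ.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation between two metric vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# normalized metrics: [adjectives, adverbs, verbs, nouns, punctuation, avg sentence length]
training = {
    "author_a": [0.08, 0.05, 0.18, 0.30, 0.12, 0.27],
    "author_b": [0.14, 0.09, 0.12, 0.22, 0.20, 0.23],
}
test_sample = [0.13, 0.10, 0.13, 0.21, 0.19, 0.24]

# stronger correlation carries more weight; the maximum decides authorship
scores = {name: pearson(vec, test_sample) for name, vec in training.items()}
predicted = max(scores, key=scores.get)
print(predicted)
```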
Autoencoder Based FDI Attack Detection Scheme For Smart Grid Stability. 2022 IEEE 19th India Council International Conference (INDICON). :1–5.
2022. One of the major concerns in the real-time monitoring systems of a smart grid is the cybersecurity threat. The false data injection attack is emerging as a major form of attack in Cyber-Physical Systems (CPS). A False Data Injection Attack (FDIA) can lead to severe issues like insufficient generation, physical damage to the grid, and power flow imbalance, as well as economic loss. The recent advancements in machine learning algorithms have helped solve the drawbacks of using classical detection techniques for such attacks. In this article, we propose to use Autoencoders (AEs) as a novel machine learning approach to detect FDI attacks without any major modifications. The performance of the method is validated through the analysis of the simulation results. The algorithm achieves optimal accuracy owing to the unsupervised nature of the algorithm.
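The detection principle behind autoencoder-based FDIA detection can be sketched as follows: the model is trained only on normal measurements, so injected measurements reconstruct poorly and are flagged by thresholding the reconstruction error. As a hedged simplification, a linear autoencoder (fitted via SVD, equivalent to PCA) stands in for the paper's network, and the measurements and threshold are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)
latent = rng.standard_normal((400, 2))
mix = rng.standard_normal((2, 8))
normal = latent @ mix + 0.05 * rng.standard_normal((400, 8))   # measurements with ~2-D structure

mean = normal.mean(0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
encoder = vt[:2].T                                   # keep the 2 principal axes

def recon_error(x):
    z = (x - mean) @ encoder                          # encode
    xhat = z @ encoder.T + mean                       # decode
    return np.linalg.norm(x - xhat, axis=-1)

threshold = np.quantile(recon_error(normal), 0.99)    # calibrated on normal data only
attacked = normal[:50] + rng.normal(0, 2.0, (50, 8))  # simulated false data injection
detected = (recon_error(attacked) > threshold).mean()
print(f"detection rate: {detected:.2f}")
```

A nonlinear autoencoder follows the same recipe with learned encode/decode functions in place of the projection.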
Automatic labeling of the elements of a vulnerability report CVE with NLP. 2022 IEEE 23rd International Conference on Information Reuse and Integration for Data Science (IRI). :164–165.
2022. Common Vulnerabilities and Exposures (CVE) databases contain information about vulnerabilities of software products and source code. If individual elements of CVE descriptions can be extracted and structured, then the data can be used to search and analyze CVE descriptions. Herein we propose a method to label each element in CVE descriptions by applying Named Entity Recognition (NER). For NER, we used BERT, a transformer-based natural language processing model. Using NER with machine learning can label information from CVE descriptions even if there are some distortions in the data. An experiment involving manually prepared label information for 1000 CVE descriptions shows that the labeling accuracy of the proposed method is about 0.81 for precision and about 0.89 for recall. In addition, we devise a way to train the model by dividing the data by label. Our proposed method can be used to automatically label each element of CVE descriptions.
Autonomic Dominant Resource Fairness (A-DRF) in Cloud Computing. 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC). :1626–1631.
2022. In the world of information technology and the Internet, which has become a part of human life today and is constantly expanding, attention to users' requirements such as information security, fast processing, dynamic and instant access, and cost savings has become essential. The solution proposed for such problems today is a technology called cloud computing. Today, cloud computing is considered one of the most essential distributed tools for processing and storing data on the Internet. With the increasing use of this tool, the need to schedule tasks to make the best use of resources and respond appropriately to requests has received much attention, and in this regard many efforts have been and are being made. To this end, various algorithms have been proposed for resource allocation, each of which has tried to solve the challenges of equitable distribution while using maximum resources. One of these methods is the DRF algorithm. Although it offers a better approach than previous algorithms, it faces challenges, especially its time-consuming resource allocation computation. These challenges make the use of DRF more complex than ever when there is a low number of requests with high resource capacity, as well as a high number of simultaneous requests. This study tried to reduce the computational costs associated with the DRF algorithm for resource allocation by introducing a new approach that automates the DRF calculations with machine learning and artificial intelligence algorithms (Autonomic Dominant Resource Fairness, or A-DRF).
A Bagging MLP-based Autoencoder for Detection of False Data Injection Attack in Smart Grid. 2022 IEEE Power & Energy Society Innovative Smart Grid Technologies Conference (ISGT). :1–5.
2022. The accelerated move toward adopting the Smart Grid paradigm has resulted in numerous drawbacks as far as security is concerned. Traditional power grids are becoming more vulnerable to cyberattacks, as all control decisions are generated based on the data the Smart Grid generates during its operation. This data can be tampered with or attacked on communication lines to mislead the control room in decision-making. The false data injection attack (FDIA) is one of the most severe cyberattacks on today's cyber-physical power system, as it has the potential to cause significant physical and financial damage. However, detecting cyberattacks is incredibly challenging since they have no known patterns. In this paper, we launch a random FDIA on the IEEE 39-bus system. We then propose a Bagging MLP-based autoencoder to detect FDIAs in the power system and compare the result with a single ML model. The Bagging MLP-based autoencoder outperforms the Isolation Forest in detecting FDIAs.
Botnet Detection via Machine Learning Techniques. 2022 International Conference on Big Data, Information and Computer Network (BDICN). :831–836.
2022. A botnet is a serious network security threat that can cause servers to crash, so detecting botnet behavior has become an important part of network security research. A DNS (Domain Name System) request is the first step for most machines controlled by a botnet to communicate with the C&C (command and control) server, so detecting DNS request domain names is an important way to identify machines controlled by a botnet. However, detection methods based on fixed rules are largely ineffective against botnets based on DGAs (Domain Generation Algorithms), because malicious domain names keep evolving and derive from many different generation methods. Contrasted with the traditional methods, methods based on machine learning detect such domains better by learning from and modeling the DGA. This paper presents a method based on the Naive Bayes model, the XGBoost model, the SVM (Support Vector Machine) model, and the MLP (Multi-Layer Perceptron) model, and tests it with real data sets collected from DGA, Alexa, and Secrepo. The experimental results show the precision score, the recall score, and the F1 score for each model.
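The core intuition of ML-based DGA detection can be sketched with a character-bigram language model: bigrams of benign domain names are counted, and a domain whose bigrams are unlikely under that model (as random-looking DGA output tends to be) is flagged. This is a hedged toy stand-in for the Naive Bayes/XGBoost/SVM/MLP classifiers and Alexa/DGA datasets the paper actually uses; the tiny training list and the threshold are invented.

```python
import math
from collections import Counter

benign = ["google", "facebook", "youtube", "wikipedia", "twitter",
          "amazon", "instagram", "linkedin", "netflix", "microsoft"]

counts = Counter(b for d in benign for b in zip(d, d[1:]))   # bigram counts
total = sum(counts.values())
vocab = 26 * 26                                              # a-z bigram space

def avg_log_likelihood(domain):
    """Mean log-probability of the domain's bigrams (Laplace-smoothed)."""
    bigrams = list(zip(domain, domain[1:]))
    return sum(math.log((counts[b] + 1) / (total + vocab)) for b in bigrams) / len(bigrams)

def looks_dga(domain, threshold=-6.2):       # threshold hand-tuned for this toy model
    return avg_log_likelihood(domain) < threshold

print(looks_dga("microsoft"))    # familiar bigrams
print(looks_dga("xjqzvkwpbh"))   # random-looking bigrams
```

The classifiers named in the abstract consume richer features (entropy, length, n-gram statistics) but exploit the same signal.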
Classification of Mobile Phone Price Dataset Using Machine Learning Algorithms. 2022 3rd International Conference on Pattern Recognition and Machine Learning (PRML). :438–443.
2022. With the development of technology, mobile phones are an indispensable part of human life. Factors such as brand, internal memory, wifi, battery power, camera, and availability of 4G now shape consumers' decisions when buying mobile phones. But people often fail to link those factors with the price of mobile phones; this paper therefore aims to address the problem by using machine learning algorithms such as Support Vector Machine, Decision Tree, K-Nearest Neighbors, and Naive Bayes to train on the mobile phone dataset before predicting the price level. We used appropriate algorithms to predict smartphone prices based on accuracy, precision, recall, and F1 score. This not only helps customers make a better choice of mobile phone but also advises businesses selling mobile phones on how to set reasonable prices for the different features they offer. This idea of predicting price levels will support customers in choosing mobile phones wisely in the future. The results illustrate that among the 4 classifiers, SVM delivers the most desirable performance, with 94.8% accuracy and 97.3% F1 score (without feature selection) and 95.5% accuracy and 97.7% F1 score (with feature selection).
Comparative Study of Machine Learning Techniques for Intrusion Detection Systems. 2022 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COM-IT-CON). 1:274–283.
2022. As part of today's technical world, we are connected through a vast network. The more we rely on these modern technologies, the more we need security. A network security system must be reliable enough to perfectly monitor an organization's whole network so that unauthorized users or intruders cannot cause security breaches. Firewalls secure our internal network from unauthorized outsiders, but attacks remain possible: according to one survey, 60% of attacks were internal to the network. So the internal system needs the same high level of security as the external one. Understanding the value of security measures in terms of accuracy, efficiency, and speed, we focus on implementing and comparing an improved intrusion detection system. A comprehensive literature review found that some feature selection techniques with standard scaling, combined with machine learning techniques, can give better results than existing ML techniques alone. In this survey paper, with the help of the univariate feature selection method, 14 essential features out of 41 are selected and used in the comparative analysis. We implemented and compared both binary-class and multi-class classification-based Intrusion Detection Systems (IDS) for two supervised machine learning techniques: Support Vector Machine and Classification and Regression Techniques.
A Comparative Study on Machine Learning based Cross Layer Security in Internet of Things (IoT). 2022 International Conference on Automation, Computing and Renewable Systems (ICACRS). :267–273.
2022. The Internet of Things is a developing technology that converts physical objects into virtual objects connected to the internet using wired and wireless network architectures. The use of cross-layer techniques in the Internet of Things is primarily driven by the high heterogeneity of hardware and software capabilities. Although the traditional layered architecture has been effective for a while, cross-layer protocols have the potential to greatly improve a number of wireless network characteristics, including bandwidth and energy usage. Also, one of the main concerns with the Internet of Things is security, and machine learning (ML) techniques are thought to be the most cutting-edge and viable approach. This has led to a plethora of new research directions for tackling IoT's growing security issues. In the proposed study, a number of cross-layer approaches based on machine learning techniques that have been offered in the past to address issues and challenges brought on by the variety of IoT are examined in depth. Additionally, the main issues are identified and analyzed, including those related to scalability, interoperability, security, privacy, mobility, and energy utilization.
Cross-Layered Cyber-Physical Power System State Estimation towards a Secure Grid Operation. 2022 IEEE Power & Energy Society General Meeting (PESGM). :1–5.
2022. In the Smart Grid paradigm, critical infrastructure operation is increasingly exposed to cyber-threats due to the increased dependency on communication networks. An adversary can launch an attack on a power grid operation through false data injection into system measurements and/or through attacks on the communication network, such as flooding the communication channels with unnecessary data or intercepting messages. A cross-layered strategy that combines power grid data, communication network monitoring, and machine learning-based processing is a promising solution for detecting cyber-threats. In this paper, an implementation of an integrated solution of a cross-layer framework is presented. The advantage of such a framework is the augmentation of valuable data, which enhances the detection of anomalies in the operation of the power grid. The IEEE 118-bus system is built in Simulink to provide a power grid testing environment, and communication network data is emulated using SimComponents. The performance of the framework is investigated under various FDI and communication attacks.
CR-Spectre: Defense-Aware ROP Injected Code-Reuse Based Dynamic Spectre. 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE). :508–513.
2022. Side-channel attacks have been a constant threat to computing systems. In recent times, vulnerabilities in the architecture were discovered and exploited to mount and execute state-of-the-art attacks such as Spectre. The Spectre attack exploits a vulnerability in Intel-based processors to leak confidential data through a covert channel. There exist some defenses to mitigate the Spectre attack. Among multiple defenses, hardware-assisted attack/intrusion detection (HID) systems have received an overwhelming response due to their low overhead and efficient attack detection. HID systems deploy machine learning (ML) classifiers to perform anomaly detection to determine whether the system is under attack. For this purpose, a performance monitoring tool profiles the applications to record hardware performance counters (HPCs), utilized for anomaly detection. Previous HID systems assume that Spectre is executed as a standalone application. In contrast, we propose an attack that dynamically generates variations in the injected code to evade detection. The attack is injected into a benign application. In this manner, the attack conceals itself as a benign application and generates perturbations to avoid detection. For the attack injection, we exploit a return-oriented programming (ROP)-based code-injection technique that reuses code, called gadgets, present in the exploited victim's (host) memory to execute the attack, which, in our case, is the CR-Spectre attack to steal sensitive data from a target victim (target) application. Our work focuses on proposing a dynamic attack that can evade HID detection by injecting perturbations, and dynamically generated variations thereof, under the cloak of a benign application. We evaluate the proposed attack on the MiBench suite as the host. In our experiments, HID performance degrades from 90% to 16%, indicating that our CR-Spectre attack avoids detection successfully.
Cyber Threat Analysis and Trustworthy Artificial Intelligence. 2022 6th International Conference on Cryptography, Security and Privacy (CSP). :86–90.
2022. Cyber threats can cause severe damage to computing infrastructure and systems, as well as data breaches that make sensitive data vulnerable to attackers and adversaries. It is therefore imperative to discover those threats and stop them before bad actors penetrate the information systems. Threat-hunting algorithms based on machine learning have shown great advantages over classical methods. Reinforcement learning models are becoming more accurate at identifying not only signature-based but also behavior-based threats. Quantum mechanics brings a new dimension in improving classification speed with an exponential advantage. The accuracy of AI/ML algorithms can be affected by many factors, from the algorithm and the data to prejudicial or even intentional bias. As a result, AI/ML applications need to be non-biased and trustworthy. In this research, we developed a machine learning-based cyber threat detection and assessment tool. It uses a two-stage (both unsupervised and supervised learning) analysis method on 822,226 log records from a web server on the AWS cloud. The results show the algorithm has the ability to identify the threats with high confidence.
Cybers Security Analysis and Measurement Tools Using Machine Learning Approach. 2022 1st International Conference on AI in Cybersecurity (ICAIC). :1–4.
2022. Artificial intelligence (AI) and machine learning (ML) have transformed our environment and the way people think, behave, and make decisions during the last few decades [1]. In the last two decades, everyone connected to the Internet, whether an enterprise or an individual, has become concerned about the security of their computational resources. Cybersecurity is responsible for protecting hardware and software resources from cyber attacks, e.g. viruses, malware, intrusion, and eavesdropping. Cyber attacks come either from black-hat hackers or cyber warfare units. Artificial intelligence (AI) and machine learning (ML) have played an important role in developing efficient cybersecurity tools. This paper presents the latest cybersecurity tools based on machine learning, which are: Windows Defender ATP, Darktrace, Cisco Network Analytics, IBM QRadar, StringSifter, Sophos Intercept X, SIME, NPL, and Symantec Targeted Attack Analytics.
Cybersecurity Education in the Age of Artificial Intelligence: A Novel Proactive and Collaborative Learning Paradigm. 2022 IEEE Frontiers in Education Conference (FIE). :1–5.
2022. This Innovative Practice Work-in-Progress paper presents a virtual, proactive, and collaborative learning paradigm that can engage learners with different backgrounds and enable effective retention and transfer of the multidisciplinary AI-cybersecurity knowledge. While progress has been made to better understand the trustworthiness and security of artificial intelligence (AI) techniques, little has been done to translate this knowledge to education and training. There is a critical need to foster a qualified cybersecurity workforce that understands the usefulness, limitations, and best practices of AI technologies in the cybersecurity domain. To address this important issue, in our proposed learning paradigm, we leverage multidisciplinary expertise in cybersecurity, AI, and statistics to systematically investigate two cohesive research and education goals. First, we develop an immersive learning environment that motivates the students to explore AI/machine learning (ML) development in the context of real-world cybersecurity scenarios by constructing learning models with tangible objects. Second, we design a proactive education paradigm with the use of hackathon activities based on game-based learning, lifelong learning, and social constructivism. The proposed paradigm will benefit a wide range of learners, especially underrepresented students. It will also help the general public understand the security implications of AI. In this paper, we describe our proposed learning paradigm and present our current progress of this ongoing research work. In the current stage, we focus on the first research and education goal and have been leveraging the cost-effective Minecraft platform to develop an immersive learning environment where the learners are able to investigate the insights of emerging AI/ML concepts by constructing related learning modules via interacting with tangible AI/ML building blocks.
ISSN: 2377-634X
Data Quality Problem in AI-Based Network Intrusion Detection Systems Studies and a Solution Proposal. 2022 14th International Conference on Cyber Conflict: Keep Moving! (CyCon). 700:367–383.
2022. Network Intrusion Detection Systems (IDSs) have been used to increase the level of network security for many years. The main purpose of such systems is to detect and block malicious activity in the network traffic. Researchers have been improving the performance of IDS technology for decades by applying various machine-learning techniques. From the perspective of academia, obtaining a quality dataset (i.e. a sufficient amount of captured network packets that contain both malicious and normal traffic) to support machine learning approaches has always been a challenge. There are many datasets publicly available for research purposes, including NSL-KDD, KDDCUP 99, CICIDS 2017 and UNSW-NB15. However, these datasets are becoming obsolete over time and may no longer be adequate or valid to model and validate IDSs against state-of-the-art attack techniques. As attack techniques are continuously evolving, datasets used to develop and test IDSs also need to be kept up to date. Proven performance of an IDS tested on old attack patterns does not necessarily mean it will perform well against new patterns. Moreover, existing datasets may lack certain data fields or attributes necessary to analyse some of the new attack techniques. In this paper, we argue that academia needs up-to-date high-quality datasets. We compare publicly available datasets and suggest a way to provide up-to-date high-quality datasets for researchers and the security industry. The proposed solution is to utilize the network traffic captured from the Locked Shields exercise, one of the world's largest live-fire international cyber defence exercises held annually by the NATO CCDCOE. During this three-day exercise, red team members consisting of dozens of white-hat hackers selected by the governments of over 20 participating countries attempt to infiltrate the networks of over 20 blue teams, who are tasked to defend a fictional country called Berylia.
After the exercise, network packets captured from each blue team’s network are handed over to each team. However, the countries are not willing to disclose the packet capture (PCAP) files to the public since these files contain specific information that could reveal how a particular nation might react to certain types of cyberattacks. To overcome this problem, we propose to create a dedicated virtual team, capture all the traffic from this team’s network, and disclose it to the public so that academia can use it for unclassified research and studies. In this way, the organizers of Locked Shields can effectively contribute to the advancement of future artificial intelligence (AI) enabled security solutions by providing annual datasets of up-to-date attack patterns.
ISSN: 2325-5374
Data Sanitization Approach to Mitigate Clean-Label Attacks Against Malware Detection Systems. MILCOM 2022 - 2022 IEEE Military Communications Conference (MILCOM). :993–998.
2022. Machine learning (ML) models are increasingly being used in the development of Malware Detection Systems. Existing research in this area primarily focuses on developing new architectures and feature representation techniques to improve the accuracy of the model. However, recent studies have shown that existing state-of-the-art techniques are vulnerable to adversarial machine learning (AML) attacks. Among those, data poisoning attacks have been identified as a top concern for ML practitioners. A recent study on clean-label poisoning attacks, in which an adversary intentionally crafts training samples in order for the model to learn a backdoor watermark, was shown to degrade the performance of state-of-the-art classifiers. Defenses against such poisoning attacks have been largely under-explored. We investigate a recently proposed clean-label poisoning attack and leverage an ensemble-based Nested Training technique to remove most of the poisoned samples from a poisoned training dataset. Our technique leverages the relatively large sensitivity of poisoned samples to feature noise that disproportionately affects the accuracy of a backdoored model. In particular, we show that for two state-of-the-art architectures trained on the EMBER dataset affected by the clean-label attack, the Nested Training approach improves the accuracy on backdoored malware samples from 3.42% to 93.2%. We also show that samples produced by the clean-label attack often successfully evade malware classification even when the classifier is not poisoned during training. However, even in such scenarios, our Nested Training technique can mitigate the effect of such clean-label-based evasion attacks by recovering the model's accuracy of malware detection from 3.57% to 93.2%.
ISSN: 2155-7586
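The noise-sensitivity intuition behind the defense described above can be illustrated with a minimal sketch: perturb each training sample's features repeatedly and flag samples whose predicted label flips unusually often, since poisoned samples tend to sit in brittle regions of a backdoored model. The `predict` callable, noise level, and threshold below are illustrative assumptions, not the paper's actual Nested Training implementation.

```python
import random

def noise_sensitivity(predict, x, trials=50, sigma=0.1, seed=0):
    """Fraction of noisy copies of x whose predicted label differs
    from the clean prediction. Brittle (potentially poisoned)
    samples score high; stable samples score near zero."""
    rng = random.Random(seed)
    base = predict(x)
    flips = 0
    for _ in range(trials):
        noisy = [v + rng.gauss(0.0, sigma) for v in x]
        if predict(noisy) != base:
            flips += 1
    return flips / trials

def sanitize(predict, dataset, threshold=0.5):
    """Keep only samples whose label is stable under feature noise."""
    return [x for x in dataset if noise_sensitivity(predict, x) < threshold]
```

A sample far from the decision boundary survives sanitization, while one whose label flips under small perturbations is discarded; the real defense additionally uses an ensemble of models trained on nested data subsets.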
Data Volume Reduction for Deep Packet Inspection by Multi-layer Application Determination. 2022 IEEE International Conference on Cyber Security and Resilience (CSR). :44–49.
.
2022. Attack detection in enterprise networks increasingly faces large data volumes, partly arriving in high bursts, and heavily fluctuating data flows. In overload situations, this often causes arbitrary discarding of data packets, which attackers can exploit to hide attack activities. Attack detection systems usually configure a comprehensive set of signatures for known vulnerabilities in different operating systems, protocols, and applications. Many of these signatures, however, are not relevant in every context, since certain vulnerabilities have already been eliminated, or the vulnerable applications or operating system versions are not installed on the systems involved. In this paper, we present an approach for clustering data flows in order to assign them to dedicated analysis units that contain only the signature sets relevant for those flows. We discuss the performance of this clustering and show how it can be used in practice to improve the efficiency of an analysis pipeline.
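The flow-to-analysis-unit assignment described above can be sketched as follows. The application labels, port hints, and signature names are hypothetical placeholders, not taken from the paper; a real system would use far richer multi-layer application determination.

```python
# Illustrative mapping from application label to the signature subset
# relevant for that application (names are placeholders).
PORT_HINTS = {80: "http", 8080: "http", 53: "dns", 25: "smtp"}

def classify_flow(flow):
    """Multi-layer determination, simplified: transport-layer port
    first, then a crude payload heuristic as a fallback."""
    app = PORT_HINTS.get(flow.get("dst_port"))
    if app is None and flow.get("payload", b"").startswith(b"GET "):
        app = "http"  # HTTP on a non-standard port
    return app

def route(flow, units):
    """Assign a flow to the analysis unit that holds only the
    signatures relevant to its application; unknown flows go to a
    default unit with the full signature set."""
    app = classify_flow(flow)
    return units.get(app, units["default"])
```

Routing flows this way lets each analysis unit load only a small signature subset, which is the source of the data-volume reduction the paper targets.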
Data-Driven Digital Twins in Surgery utilizing Augmented Reality and Machine Learning. 2022 IEEE International Conference on Communications Workshops (ICC Workshops). :580–585.
.
2022. On the one hand, laparoscopic surgery, as a state-of-the-art medical method, is minimally invasive and thus less stressful for patients. On the other hand, laparoscopy places higher demands on physicians, such as mental load and preparation time, so appropriate technical support is essential for quality and success. Medical Digital Twins provide an integrated, virtual representation of patient and organ data, and thus a generic concept for making complex information accessible to surgeons. In this way, minimally invasive surgery could be improved significantly, but this requires a much more complex software system to meet the various resulting requirements. The biggest challenges for such systems are the safe and precise mapping of the digital twin to reality, i.e., dealing with deformations, movement, and distortions, as well as balancing the competing requirements of intuitive, immersive user access and security. The case study ARAILIS is presented as a proof of concept for such a system and provides a starting point for further research. Based on the insights delivered by this prototype, a vision for future Medical Digital Twins in surgery is derived and discussed.
ISSN: 2694-2941
DDoS Attack Detection and Botnet Prevention using Machine Learning. 2022 8th International Conference on Advanced Computing and Communication Systems (ICACCS). 1:1159–1163.
.
2022. One of the major threats in the cyber security and networking world is the Distributed Denial of Service (DDoS) attack. With the massive development of science and technology, the privacy and security of organizations have become a major concern. Computer intrusion and DDoS attacks have always been significant issues in networked environments. A DDoS attack makes services unavailable to end-users: it interrupts regular traffic flow and floods the target with packets, causing the system to crash. This research presents a machine learning-based DDoS attack detection system to overcome this challenge. For training and testing we used the NSL-KDD dataset. We trained our model with a Logistic Regression classifier, a Support Vector Machine, K-Nearest Neighbours, and a Decision Tree classifier, which achieved accuracies of 90.4%, 90.36%, 89.15%, and 82.28%, respectively. We have also added a feature called Botnet Prevention, which scans for phishing URLs and prevents a healthy device from becoming part of a botnet.
ISSN: 2575-7288
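As a rough illustration of the classification step in such a system, the sketch below trains a plain-Python logistic regression (a stdlib stand-in for the library classifiers the paper evaluates on NSL-KDD features) on a single toy feature such as a normalized packet rate. The feature, data, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
import math
import random

def train_logreg(X, y, lr=0.1, epochs=200, seed=0):
    """Logistic regression via stochastic gradient descent.
    X: list of feature vectors, y: list of 0/1 labels
    (0 = benign flow, 1 = DDoS flow in this toy setting)."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1) for _ in X[0]]
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            g = p - yi                        # gradient of log-loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(model, x):
    """1 if the flow is classified as attack traffic, else 0."""
    w, b = model
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if z > 0 else 0
```

On linearly separable toy data (low packet rates labeled benign, high rates labeled attack), the learned boundary lands between the two groups; a production system would instead train on the full NSL-KDD feature set with an established ML library.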