Biblio

Found 1057 results

Filters: Keyword is machine learning  [Clear All Filters]
2022-03-15
Örs, Faik Kerem, Aydın, Mustafa, Boğatarkan, Aysu, Levi, Albert.  2021.  Scalable Wi-Fi Intrusion Detection for IoT Systems. 2021 11th IFIP International Conference on New Technologies, Mobility and Security (NTMS). :1–6.
The pervasive and resource-constrained nature of Internet of Things (IoT) devices makes them attractive targets for various cyber threats. Vast numbers of botnets are deployed every day, seeking to expand their presence on the Internet and carry out malicious activities through compromised interconnected devices. Monitoring IoT networks with intrusion detection systems is therefore one of the major countermeasures against such threats. In this work, we present a machine learning based Wi-Fi intrusion detection system developed specifically for IoT devices. We show that a single multi-class classifier, which operates on encrypted data collected from the wireless data link layer, can distinguish benign traffic from six types of IoT attacks with an overall accuracy of 96.85%. Our model is scalable, since there is no need to train different classifiers for different IoT devices. We also present an alternative attack classifier that outperforms the attack classification model developed in an existing study using the same dataset.
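The single-classifier design summarized in this abstract can be illustrated with a toy nearest-centroid model. Everything below (the attack labels, the two-dimensional features, the training points) is invented for illustration and is not the paper's classifier or dataset:

```python
# Toy nearest-centroid multi-class classifier: one model covers benign
# traffic plus several attack classes, mirroring the single-classifier
# design described in the abstract. Labels and features are hypothetical.

def fit_centroids(samples):
    """samples: list of (feature_vector, label) -> {label: centroid}."""
    sums, counts = {}, {}
    for x, y in samples:
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def predict(centroids, x):
    """Return the label whose centroid is nearest in Euclidean distance."""
    return min(centroids,
               key=lambda y: sum((a - b) ** 2 for a, b in zip(centroids[y], x)))

train = [([0.1, 0.2], "benign"), ([0.2, 0.1], "benign"),
         ([5.0, 4.8], "deauth"), ([4.9, 5.2], "deauth"),
         ([9.0, 0.5], "flood"), ([8.8, 0.4], "flood")]
model = fit_centroids(train)
```

A real deployment would replace the centroid rule with the paper's trained multi-class classifier and its link-layer feature extraction.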
2022-03-14
Ali, Ahtasham, Al-Perumal, Sundresan.  2021.  Source Code Analysis for Mobile Applications for Privacy Leaks. 2021 IEEE Madras Section Conference (MASCON). :1–6.
Intelligent gadgets such as smartphones, tablets, and personal digital assistants play an increasingly important part in our lives and have become indispensable in our everyday routines. As a result, the market for mobile apps tends to grow at a rapid rate, and mobile app utilization has long eclipsed that of desktop software. The applications running on these smartphones are becoming vulnerable due to the use of open-source operating systems in these smart devices. Because of flaws such as memory leaks, applications can expose personal data, compromise the smartphone, and monitor private activity, causing significant financial loss. Because of these issues, smartphone security plays a vital role in our daily lives. The Play Store contains unrated applications which any unprofessional developer can publish, and these applications do not pass through a rigorous process of testing and analysis for code leaks. Existing systems do not include a stringent procedure to examine and investigate source code to detect such vulnerabilities in mobile applications. This paper presents a robust dynamic-analysis-based system for source code analysis of mobile applications for privacy leaks using a machine learning algorithm. Our framework, called Source Code Analysis of Mobile Applications (SCA-MA), combines DynaLog with our machine learning based classifier. Our dataset will contain around 20000 applications for testing and vulnerability analysis. We perform dynamic analysis and separately classify vulnerable and safe applications. Our results show that we can detect vulnerabilities through the proposed system while reviewing code, with better results than other existing frameworks. We evaluated our large dataset comprehensively, so that even small privacy leaks that can harm an app are detected. Finally, we compared our results with existing methods, and the framework's performance is better than that of other methods.
Mehra, Misha, Paranjape, Jay N., Ribeiro, Vinay J..  2021.  Improving ML Detection of IoT Botnets using Comprehensive Data and Feature Sets. 2021 International Conference on COMmunication Systems & NETworkS (COMSNETS). :438–446.
In recent times, the world has seen a tremendous increase in the number of attacks on IoT devices. A majority of these attacks have been botnet attacks, where an army of compromised IoT devices is used to launch DDoS attacks on targeted systems. In this paper, we study how the choice of a dataset and the extracted features determine the performance of a Machine Learning model, given the task of classifying Linux binaries (ELFs) as benign or malicious. Our work focuses on Linux systems since embedded Linux is the more popular choice for building today's IoT devices and systems. We propose using four different types of files as the dataset for any ML model: system files, IoT application files, IoT botnet files and general malware files. Further, we propose using static, dynamic as well as network features for the classification task. We show that existing methods leave out one or another of these features or file types, and hence our model outperforms them in accuracy when detecting these files. While enhancing the dataset adds to the robustness of a model, utilizing all three types of features decreases the false positive and false negative rates non-trivially. We employ an exhaustive scenario-based method for evaluating an ML model and show the importance of including each of the proposed file types in a dataset. We also analyze the features and explain their importance for a model, using observed trends in different benign and malicious files. We perform feature extraction using the open-source Limon sandbox, which prior to this work had been tested only on Ubuntu 14. We installed and configured it for Ubuntu 18, and the documentation has been shared on GitHub.
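The paper's argument that static, dynamic and network features should be used together can be sketched as a simple concatenation of feature families into one fixed-order vector; the feature names below are hypothetical placeholders, not the paper's feature set:

```python
# Illustrative sketch: combine static, dynamic, and network features for
# one ELF sample into a single fixed-order vector. All feature names are
# invented; a real pipeline would use features extracted by a sandbox.

STATIC_KEYS = ["entropy", "num_sections"]
DYNAMIC_KEYS = ["syscalls", "files_written"]
NETWORK_KEYS = ["dns_queries", "unique_ips"]

def feature_vector(static, dynamic, network):
    """Concatenate the three feature families; missing keys default to 0."""
    row = []
    for keys, feats in ((STATIC_KEYS, static),
                        (DYNAMIC_KEYS, dynamic),
                        (NETWORK_KEYS, network)):
        row.extend(float(feats.get(k, 0)) for k in keys)
    return row

sample = feature_vector({"entropy": 7.2, "num_sections": 9},
                        {"syscalls": 140},
                        {"dns_queries": 3, "unique_ips": 2})
```

Leaving out any of the three dictionaries simply zeroes that slice of the vector, which is the ablation the paper argues hurts detection accuracy.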
Hahanov, V.I., Saprykin, A.S..  2021.  Federated Machine Learning Architecture for Searching Malware. 2021 IEEE East-West Design & Test Symposium (EWDTS). :1–4.
Modern technologies for malware search, cloud-edge computing, and federated algorithms and machine learning architectures are surveyed. Architectures for malware search based on the XOR metric, as applied in the design and test of computing systems, are proposed. A federated ML method is proposed for malware search that significantly speeds up learning without access to users' private big data. A federated infrastructure for cloud-edge computing is described. The use of signature analysis and an assertion engine for malware search is shown. The paradigm of LTF-computing for finding destructive components in software applications is proposed.
Gustafson, Erik, Holzman, Burt, Kowalkowski, James, Lamm, Henry, Li, Andy C. Y., Perdue, Gabriel, Isakov, Sergei V., Martin, Orion, Thomson, Ross, Beall, Jackson et al..  2021.  Large scale multi-node simulations of ℤ2 gauge theory quantum circuits using Google Cloud Platform. 2021 IEEE/ACM Second International Workshop on Quantum Computing Software (QCS). :72–79.
Simulating quantum field theories on a quantum computer is one of the most exciting fundamental physics applications of quantum information science. Dynamical time evolution of quantum fields is a challenge that is beyond the capabilities of classical computing, but it can teach us important lessons about the fundamental fabric of space and time. Whether we may answer scientific questions of interest using near-term quantum computing hardware is an open question that requires a detailed simulation study of quantum noise. Here we present a large scale simulation study powered by a multi-node implementation of qsim using the Google Cloud Platform. We additionally employ newly-developed GPU capabilities in qsim and show how Tensor Processing Units — Application-specific Integrated Circuits (ASICs) specialized for Machine Learning — may be used to dramatically speed up the simulation of large quantum circuits. We demonstrate the use of high performance cloud computing for simulating ℤ2 quantum field theories on system sizes up to 36 qubits. We find this lattice size is not able to simulate our problem and observable combination with sufficient accuracy, implying more challenging observables of interest for this theory are likely beyond the reach of classical computation using exact circuit simulation.
2022-03-10
Tiwari, Sarthak, Bansal, Ajay.  2021.  Domain-Agnostic Context-Aware Framework for Natural Language Interface in a Task-Based Environment. 2021 IEEE 45th Annual Computers, Software, and Applications Conference (COMPSAC). :15–20.
Smart home assistants are becoming a norm due to their ease of use. They employ spoken language as an interface, facilitating easy interaction with their users. Even with their obvious advantages, natural-language based interfaces are not prevalent outside the domain of home assistants. It is hard to adopt them for computer-controlled systems due to the numerous complexities involved in their implementation across varying fields. The main challenge is the grounding of natural-language terms into the underlying system's primitives. The existing systems that do use natural language interfaces are specific to one problem domain only. In this paper, a domain-agnostic framework that creates natural language interfaces for computer-controlled systems has been developed by creating a customizable mapping between the language constructs and the system primitives. The framework employs ontologies built using OWL (Web Ontology Language) for knowledge representation and machine learning models for language processing tasks.
Pölöskei, István.  2021.  Continuous natural language processing pipeline strategy. 2021 IEEE 15th International Symposium on Applied Computational Intelligence and Informatics (SACI). :000221–000224.
Natural language processing (NLP) is a branch of artificial intelligence. The constructed model's quality is entirely reliant on the quality of the training dataset. A data streaming pipeline is the glue application that provides a managed connection from data sources to machine learning methods. The recommended NLP pipeline composition has well-defined procedures. The implemented message-broker design is a standard apparatus for delivering events. It makes it possible to construct a robust training dataset for the machine learning use case and to serve the model's input. The reconstructed dataset is a valid input for the machine learning processes. Based on the data pipeline's output, model retraining and redeployment can be scheduled automatically.
Ahirrao, Mayur, Joshi, Yash, Gandhe, Atharva, Kotgire, Sumeet, Deshmukh, Rohini G..  2021.  Phrase Composing Tool using Natural Language Processing. 2021 International Conference on Intelligent Technologies (CONIT). :1–4.
In this fast-running world, machine communication plays a vital role, and human-machine interaction is necessary to compete. To enhance this interaction, the Natural Language Processing (NLP) technique is widely used; with it, we can reduce the interaction gap between machine and human. Many applications using this technique have been developed to date. This tool deals with the various methods used for grammar error correction: the rule-based method, the classifier-based method and the machine-translation-based method. Models for the NLP pipeline are trained and implemented in this project accordingly. Additionally, the tool can also perform speech-to-text conversion.
2022-03-08
Kim, Ji-Hoon, Park, Yeo-Reum, Do, Jaeyoung, Ji, Soo-Young, Kim, Joo-Young.  2021.  Accelerating Large-Scale Nearest Neighbor Search with Computational Storage Device. 2021 IEEE 29th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM). :254–254.
The K-nearest neighbor algorithm, which searches for the K closest samples in a high-dimensional feature space, is one of the most fundamental tasks in machine learning and image retrieval applications. Computational storage devices, which combine a computing unit and a storage module on a single board, have become popular for addressing the data bandwidth bottleneck of conventional computing systems. In this paper, we propose a nearest neighbor search acceleration platform based on a computational storage device, which can process a large-scale image dataset efficiently in terms of speed, energy, and cost. We believe the proposed acceleration platform is promising for deployment in cloud datacenters for data-intensive applications.
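The core computation such a platform offloads, exhaustive K-nearest-neighbor search, can be sketched in a few lines. This is a naive in-memory version for illustration only; the toy vectors are invented, and real systems use quantized or approximate indexes:

```python
# Minimal brute-force K-nearest-neighbor search: rank every database
# vector by squared Euclidean distance to the query and keep the top k.

def knn(query, database, k):
    """Return indices of the k vectors in `database` closest to `query`."""
    def dist2(v):
        return sum((a - b) ** 2 for a, b in zip(query, v))
    ranked = sorted(range(len(database)), key=lambda i: dist2(database[i]))
    return ranked[:k]

db = [[0.0, 0.0], [1.0, 1.0], [0.1, 0.0], [5.0, 5.0]]
```

The O(N) scan over the whole database per query is exactly the bandwidth-bound workload that motivates moving the computation next to storage.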
Wang, Xinyi, Yang, Bo, Liu, Qi, Jin, Tiankai, Chen, Cailian.  2021.  Collaboratively Diagnosing IGBT Open-circuit Faults in Photovoltaic Inverters: A Decentralized Federated Learning-based Method. IECON 2021 – 47th Annual Conference of the IEEE Industrial Electronics Society. :1–6.
In photovoltaic (PV) systems, machine learning-based methods have been used for fault detection and diagnosis in the past years, which require large amounts of data. However, fault types in a single PV station are usually insufficient in practice. Due to insufficient and non-identically distributed data, packet loss and privacy concerns, it is difficult to train a model for diagnosing all fault types. To address these issues, in this paper, we propose a decentralized federated learning (FL)-based fault diagnosis method for insulated gate bipolar transistor (IGBT) open-circuits in PV inverters. All PV stations use the convolutional neural network (CNN) to train local diagnosis models. By aggregating neighboring model parameters, each PV station benefits from the fault diagnosis knowledge learned from neighbors and achieves diagnosing all fault types without sharing original data. Extensive experiments are conducted in terms of non-identical data distributions, various transmission channel conditions and whether to use the FL framework. The results are as follows: 1) Using data with non-identical distributions, the collaboratively trained model diagnoses faults accurately and robustly; 2) The continuous transmission and aggregation of model parameters in multiple rounds make it possible to obtain ideal training results even in the presence of packet loss; 3) The proposed method allows each PV station to diagnose all fault types without original data sharing, which protects data privacy.
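One aggregation round of the decentralized scheme described above (each station averaging model parameters with its neighbors, with no central server and no raw data exchanged) might look like the following sketch; the three-station ring topology and the parameter values are invented:

```python
# Sketch of one decentralized federated-learning aggregation round:
# every station replaces its parameter vector with the mean over itself
# and its neighbors. Repeated rounds drive the stations toward consensus.

def aggregate_round(params, neighbors):
    """params: {station: [weights]}; neighbors: {station: [stations]}."""
    new_params = {}
    for station, own in params.items():
        group = [params[n] for n in neighbors[station]] + [own]
        new_params[station] = [sum(col) / len(group) for col in zip(*group)]
    return new_params

stations = {"A": [1.0, 0.0], "B": [0.0, 1.0], "C": [2.0, 2.0]}
ring = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
step1 = aggregate_round(stations, ring)
```

In the paper the exchanged parameters come from locally trained CNNs; here plain vectors stand in for them to show only the aggregation step.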
2022-03-01
Kaur, Rajwinder, Kaur Sandhu, Jasminder.  2021.  A Study on Security Attacks in Wireless Sensor Network. 2021 International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE). :850–855.
Wireless Sensor Networks (WSNs) are a most promising area, widely used in the fields of military, healthcare systems, flood control, and weather forecasting. In a WSN every node is connected with other nodes and exchanges information with them. While sending data between nodes, data security is an important factor; security is a vital issue in the area of networking. This paper addresses the issue of security in terms of distinct attacks and the solutions provided by different authors. Whenever data is transferred from source to destination it follows some route, so there is a possibility of a malicious node in the network, and identifying the malicious node present in the network is a very difficult task. An intruder may attack data packets that are transferred from one node to another: while the data travels from source to destination, an attacker can intercept it and change the actual data. In this paper, we discuss the numerous security solutions provided by different authors, who used Machine Learning (ML) approaches to handle the attacks; various ML techniques are used to determine the authenticity of a node. Network attacks are elaborated according to the layer of the WSN architecture. We categorize the security attacks layer-wise and type-wise and present solutions using ML techniques for handling each security attack.
Man, Jiaxi, Li, Wei, Wang, Hong, Ma, Weidong.  2021.  On the Technology of Frequency Hopping Communication Network-Station Selection. 2021 International Conference on Electronics, Circuits and Information Engineering (ECIE). :35–41.
In electronic warfare, communication can hardly counter reconnaissance and jamming without frequency-hopping (FH) network-station selection. Competition over the electromagnetic spectrum is becoming ever fiercer with the increasingly complex electromagnetic environment of the modern battlefield. Research on the detection, identification, parameter estimation and network-station selection of frequency-hopping communication networks has aroused the interest of scholars both at home and abroad, and is summarized in this paper. Firstly, the working modes and characteristics of two kinds of FH communication networking, the synchronous orthogonal network and the asynchronous non-orthogonal network, are introduced. Then, drawing on the analysis of FH signal parameters (hop timing, hop frequency, bandwidth, carrier frequency and direction of arrival) together with time-frequency analysis, clustering analysis and machine learning methods, feature-based parameter selection is adopted to sort FH network stations. Finally, the key difficulties of current research on FH communication network separation and the state of research on blind source separation are introduced in detail.
Huang, Shanshi, Peng, Xiaochen, Jiang, Hongwu, Luo, Yandong, Yu, Shimeng.  2021.  Exploiting Process Variations to Protect Machine Learning Inference Engine from Chip Cloning. 2021 IEEE International Symposium on Circuits and Systems (ISCAS). :1–5.
Machine learning inference engines are of great interest for smart edge computing. Compute-in-memory (CIM) architecture has shown significant improvements in throughput and energy efficiency for hardware acceleration. Emerging nonvolatile memory (eNVM) technologies offer great potential for instant on and off by dynamic power gating. An inference engine is typically pre-trained in the cloud and then deployed to the field, which raises a new security concern: cloning of the weights stored on an eNVM-based CIM chip. In this paper, we propose a countermeasure to the weight cloning attack that exploits the process variations of the periphery circuitry. In particular, we use weight fine-tuning to compensate for the analog-to-digital converter (ADC) offset of a specific chip instance while inducing a significant accuracy drop on cloned chip instances. We evaluate our proposed scheme on a CIFAR-10 classification task using a VGG-8 network. Our results show that with a precisely chosen transistor size in the employed SAR ADC, we can maintain 88%-90% accuracy on the fine-tuned chip, while the same set of weights cloned onto other chips yields only 20%-40% accuracy on average. The weight fine-tuning can be completed within one epoch of 250 iterations. On average only 0.02%, 0.025% and 0.142% of cells are updated per iteration for 2-bit, 4-bit and 8-bit weight precisions, respectively.
Amaran, Sibi, Mohan, R. Madhan.  2021.  Intrusion Detection System Using Optimal Support Vector Machine for Wireless Sensor Networks. 2021 International Conference on Artificial Intelligence and Smart Systems (ICAIS). :1100–1104.
Wireless sensor networks (WSNs) comprise numerous battery-operated, compact and inexpensive sensor nodes, which are commonly employed to observe physical parameters in a target environment. As the sensor nodes are placed arbitrarily in open areas, there is a higher possibility of their being affected by distinct kinds of attacks. To resolve this issue, intrusion detection systems (IDSs) have been developed. This paper presents a new optimal Support Vector Machine (OSVM) based IDS for WSNs. The presented OSVM model involves the proficient selection of optimal kernels in the SVM model using the whale optimization algorithm (WOA) for intrusion detection. Since the SVM kernel is tuned using WOA, the OSVM model can detect intrusions with proficient results. The performance of the OSVM model has been investigated on the benchmark NSL-KDDCup 99 dataset. The simulation results demonstrated the effectiveness of the OSVM model, which obtained a superior accuracy of 94.09% and a detection rate of 95.02%.
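The whale optimization algorithm used here for kernel tuning can be sketched, in a heavily simplified encircling-only form, as a shrinking stochastic search around the best candidate so far. The 1-D objective standing in for validation error, the bounds, and all constants below are illustrative; the full WOA also has a spiral phase and a random-search phase:

```python
import random

# Highly simplified, encircling-only sketch of Whale Optimization (WOA)
# minimizing a toy 1-D objective, to illustrate how WOA can tune a single
# SVM hyperparameter. Not the paper's implementation.

def woa_minimize(f, lo, hi, n_whales=10, iters=50, seed=1):
    rng = random.Random(seed)
    whales = [rng.uniform(lo, hi) for _ in range(n_whales)]
    best = min(whales, key=f)
    for t in range(iters):
        a = 2.0 * (1 - t / iters)           # a decreases linearly 2 -> 0
        for i, x in enumerate(whales):
            A = a * (2 * rng.random() - 1)  # A drawn from [-a, a]
            C = 2 * rng.random()
            x_new = best - A * abs(C * best - x)  # encircle the best whale
            whales[i] = min(max(x_new, lo), hi)   # clamp to the search box
        best = min(whales + [best], key=f)        # best never worsens
    return best

# Pretend validation error is minimized at kernel parameter gamma = 3.
best_gamma = woa_minimize(lambda g: (g - 3.0) ** 2, 0.0, 10.0)
```

As `a` shrinks the whales collapse onto the incumbent best, so the search transitions from exploration to local refinement.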
Ding, Shanshuo, Wang, Yingxin, Kou, Liang.  2021.  Network Intrusion Detection Based on BiSRU and CNN. 2021 IEEE 18th International Conference on Mobile Ad Hoc and Smart Systems (MASS). :145–147.
In recent years, with the continuous development of artificial intelligence algorithms, their applications in network intrusion detection have become more and more widespread. However, as network speeds continue to increase, network traffic grows dramatically, and the drawbacks of traditional machine learning methods, such as high false alarm rates and long training times, are gradually revealed. CNNs (Convolutional Neural Networks) can only extract the spatial features of data, which is clearly insufficient for network intrusion detection. In this paper, we propose an intrusion detection model that combines a CNN and a BiSRU (Bi-directional Simple Recurrent Unit) to achieve intrusion detection by processing network traffic logs. First, we extract the spatial features of the original data using the CNN; we then use these as input to the BiSRU to further extract temporal features, and finally output the classification results via softmax.
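The SRU recurrence at the heart of a BiSRU can be sketched for a single scalar unit: the state is a gated running average of the inputs, and the bidirectional variant concatenates a forward pass with a backward pass. The weights below are arbitrary constants, not a trained intrusion-detection model:

```python
import math

# Minimal scalar sketch of the SRU state recurrence and its bidirectional
# (BiSRU) wrapper. Real SRUs are vector-valued, add a highway/reset gate,
# and learn their weights; this only shows the gated-state idea.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sru(seq, w=1.0, wf=0.5, bf=0.0):
    """One-unit SRU: c_t = f_t * c_{t-1} + (1 - f_t) * (w * x_t)."""
    c, states = 0.0, []
    for x in seq:
        f = sigmoid(wf * x + bf)         # forget gate from current input
        c = f * c + (1.0 - f) * (w * x)  # gated state update
        states.append(c)
    return states

def bisru(seq, **kw):
    """Pair each step's forward state with its backward-pass state."""
    fwd = sru(seq, **kw)
    bwd = sru(list(reversed(seq)), **kw)[::-1]
    return list(zip(fwd, bwd))

out = bisru([1.0, 2.0, 3.0])
```

Because the gates depend only on the current input, the per-step updates are elementwise, which is what makes SRUs much faster to train than LSTMs.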
Sapre, Suchet, Islam, Khondkar, Ahmadi, Pouyan.  2021.  A Comprehensive Data Sampling Analysis Applied to the Classification of Rare IoT Network Intrusion Types. 2021 IEEE 18th Annual Consumer Communications & Networking Conference (CCNC). :1–2.
With the rapid growth of Internet of Things (IoT) network intrusion attacks, there is a critical need for sophisticated and comprehensive intrusion detection systems (IDSs). Classifying infrequent intrusion types such as root-to-local (R2L) and user-to-root (U2R) attacks is a recurring problem for IDSs. In this study, various data sampling and class balancing techniques (Generative Adversarial Network (GAN)-based oversampling, k-nearest-neighbor (kNN) oversampling, NearMiss-1 undersampling, and class weights) were used to resolve the severe class imbalance affecting U2R and R2L attacks in the NSL-KDD intrusion detection dataset. Artificial Neural Networks (ANNs) were trained on the adjusted datasets, and their performance was evaluated with a multitude of classification metrics. Here, we show that using no data sampling technique (the baseline), GAN-based oversampling, and NearMiss-1 undersampling, all with class weights, yielded high performance in identifying R2L and U2R attacks. Of these, the baseline with class weights had the highest overall performance, with F1-scores of 0.11 and 0.22 for the identification of U2R and R2L attacks, respectively.
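The class-weight balancing evaluated above is commonly computed as inverse class frequency, so rare classes such as U2R and R2L receive proportionally larger weights in the loss. A sketch with invented counts (not the actual NSL-KDD distribution):

```python
# Inverse-frequency class weights: weight = total / (n_classes * count),
# the standard "balanced" heuristic. Counts below are made up.

def class_weights(counts):
    """{label: n_samples} -> {label: weight}."""
    total, k = sum(counts.values()), len(counts)
    return {label: total / (k * n) for label, n in counts.items()}

w = class_weights({"normal": 9000, "dos": 900, "u2r": 50, "r2l": 50})
```

During training, each sample's loss contribution is multiplied by its class weight, which is why the rare-attack recall improves without any resampling.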
2022-02-25
Xie, Bing, Tan, Zilong, Carns, Philip, Chase, Jeff, Harms, Kevin, Lofstead, Jay, Oral, Sarp, Vazhkudai, Sudharshan S., Wang, Feiyi.  2021.  Interpreting Write Performance of Supercomputer I/O Systems with Regression Models. 2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS). :557–566.
This work seeks to advance the state of the art in HPC I/O performance analysis and interpretation. In particular, we demonstrate effective techniques to: (1) model output performance in the presence of I/O interference from production loads; (2) build features from write patterns and key parameters of the system architecture and configurations; (3) employ suitable machine learning algorithms to improve model accuracy. We train models with five popular regression algorithms and conduct experiments on two distinct production HPC platforms. We find that the lasso and random forest models predict output performance with high accuracy on both of the target systems. We also explore use of the models to guide adaptation in I/O middleware systems, and show potential for improvements of at least 15% from model-guided adaptation on 70% of samples, and improvements of up to 10x on some samples, for both of the target systems.

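The regression-modeling idea above, fitting a function from write-pattern features to observed performance, can be sketched with a one-feature ordinary-least-squares fit. The data points and the "concurrent writers vs. bandwidth" framing are invented, and the paper's actual models (lasso, random forest) are far more capable:

```python
# Toy ordinary-least-squares fit of performance against a single feature,
# standing in for the multi-feature regression models the study trains.

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# e.g. hypothetical per-process bandwidth vs. number of concurrent writers
slope, intercept = fit_line([1, 2, 3, 4], [10.0, 8.0, 6.0, 4.0])
```

With a fitted model in hand, middleware can predict the performance of a candidate write configuration before committing to it, which is the model-guided adaptation the abstract describes.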
Abutaha, Mohammed, Ababneh, Mohammad, Mahmoud, Khaled, Baddar, Sherenaz Al-Haj.  2021.  URL Phishing Detection using Machine Learning Techniques based on URLs Lexical Analysis. 2021 12th International Conference on Information and Communication Systems (ICICS). :147–152.
Phishing URLs mainly target individuals and organizations through social engineering attacks that exploit human weaknesses in information security awareness. These URLs lure online users to fake websites that harvest their confidential information, such as debit/credit card numbers and other sensitive data. In this work, we introduce a phishing detection technique based on URL lexical analysis and machine learning classifiers. The experiments were carried out on a dataset that originally contained 1056937 labeled URLs (phishing and legitimate). This dataset was processed to generate 22 different features, which were further reduced to a smaller set using different feature reduction techniques. Random Forest, Gradient Boosting, Neural Network and Support Vector Machine (SVM) classifiers were all evaluated, and the results show the superiority of SVMs, which achieved the highest accuracy in detecting the analyzed URLs, with a rate of 99.89%. Our approach can be incorporated as an add-on/middleware feature in Internet browsers to alert online users whenever they try to access a phishing website using only its URL.
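Lexical features of the kind described, computed from the URL string alone with no page fetch, can be sketched as follows; the specific features are typical examples from the phishing-detection literature, not the paper's exact 22-feature set:

```python
from urllib.parse import urlparse

# Sketch of URL lexical feature extraction: every feature derives from
# the URL string itself. Feature choices here are illustrative.

def lexical_features(url):
    host = urlparse(url).netloc
    return {
        "url_length": len(url),
        "host_length": len(host),
        "num_dots": host.count("."),
        "num_digits": sum(ch.isdigit() for ch in url),
        "has_at": "@" in url,
        "has_ip_host": host.replace(".", "").isdigit(),
        "num_hyphens": host.count("-"),
    }

f = lexical_features("http://192.168.0.1/paypal-login@secure")
```

A dictionary like this, computed per URL, becomes one row of the training matrix fed to the classifiers the paper compares.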
Wilms, Daniel, Stoecker, Carsten, Caballero, Juan.  2021.  Data Provenance in Vehicle Data Chains. 2021 IEEE 93rd Vehicular Technology Conference (VTC2021-Spring). :1–5.
With almost every new vehicle being connected, the importance of vehicle data is growing rapidly. Many mobility applications rely on the fusion of data coming from heterogeneous data sources, like vehicle and "smart-city" data or process data generated by systems out of their control. This external data determines much about the behaviour of the relying applications: it impacts the reliability, security and overall quality of the application's input data and ultimately of the application itself. Hence, knowledge about the provenance of that data is a critical component in any data-driven system. The secure traceability of the data handling along the entire processing chain, which passes through various distinct systems, is critical for the detection and avoidance of misuse and manipulation. In this paper, we introduce a mechanism for establishing secure data provenance in real time, demonstrating an exemplary use-case based on a machine learning model that detects dangerous driving situations. We show with our approach based on W3C decentralized identity standards that data provenance in closed data systems can be effectively achieved using technical standards designed for an open data approach.
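A minimal flavor of tamper-evident provenance can be given with a hash-chained log, where each processing step commits to the previous entry. This generic sketch is not the paper's W3C decentralized-identity mechanism, and the actors and payloads below are invented:

```python
import hashlib
import json

# Hash-chained provenance log: each entry stores the hash of the previous
# entry, so any later modification of the chain is detectable on replay.

def append_entry(chain, actor, action, payload):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"actor": actor, "action": action,
            "payload": payload, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    prev = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != recomputed:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, "vehicle-42", "emit", {"speed_kmh": 87})
append_entry(log, "edge-node", "aggregate", {"window_s": 10})
```

The paper's approach additionally binds each entry to a verifiable decentralized identity, so the "actor" field itself is cryptographically attested rather than free text.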
2022-02-24
Kroeger, Trevor, Cheng, Wei, Guilley, Sylvain, Danger, Jean-Luc, Karimi, Nazhmeh.  2021.  Making Obfuscated PUFs Secure Against Power Side-Channel Based Modeling Attacks. 2021 Design, Automation & Test in Europe Conference & Exhibition (DATE). :1000–1005.
To enhance the security of digital circuits, there is often a desire to dynamically generate, rather than statically store, the random values used for identification and authentication purposes. Physically Unclonable Functions (PUFs) provide the means to realize this feature in an efficient and reliable way by utilizing commonly overlooked process variations that unintentionally occur during the manufacturing of integrated circuits (ICs) due to the imperfection of the fabrication process. When given a challenge, PUFs produce a unique response. However, PUFs have been found to be vulnerable to modeling attacks, in which a machine learning model trained on a set of collected challenge-response pairs (CRPs) can predict the responses to unseen challenges. To combat this vulnerability, researchers have proposed techniques such as challenge obfuscation. However, as shown in this paper, this technique can be compromised by modeling the PUF's power side-channel. We first show the vulnerability of a state-of-the-art Challenge Obfuscated PUF (CO-PUF) against power analysis attacks by presenting our attack results on the targeted CO-PUF. Then we propose two countermeasures, as well as their hybrid version, which, when applied to CO-PUFs, make them resilient against power side-channel based modeling attacks. We also provide some insights into the design metrics that should be considered when implementing these mitigations. Our simulation results show the high success of our attack in compromising the original Challenge Obfuscated PUFs (success rate > 98%) as well as the significant improvement in resilience of the obfuscated PUFs against power side-channel based modeling when equipped with our countermeasures.
Musa, Usman Shuaibu, Chakraborty, Sudeshna, Abdullahi, Muhammad M., Maini, Tarun.  2021.  A Review on Intrusion Detection System Using Machine Learning Techniques. 2021 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS). :541–549.
Computer networks are exposed to cyber attacks due to the common use of the Internet; as a result, several intrusion detection systems (IDSs) have been proposed by researchers. Detecting intrusions is among the key research issues in securing networks: it helps to recognize unauthorized usage and attacks as a measure to ensure the network's security. Various approaches have been proposed to determine the most effective features and hence enhance the efficiency of intrusion detection systems, including machine learning (ML) based methods, Bayesian algorithms, nature-inspired meta-heuristic techniques, swarm intelligence algorithms, and Markov neural networks. Over the years, these works have been evaluated on different datasets. This paper presents a thorough review of research articles that employed single, hybrid and ensemble classification algorithms. The result metrics, shortcomings and datasets used by the studied articles in the development of IDSs are compared, and future directions for potential research are given.
Duan, Xuanyu, Ge, Mengmeng, Minh Le, Triet Huynh, Ullah, Faheem, Gao, Shang, Lu, Xuequan, Babar, M. Ali.  2021.  Automated Security Assessment for the Internet of Things. 2021 IEEE 26th Pacific Rim International Symposium on Dependable Computing (PRDC). :47–56.
Internet of Things (IoT) based applications face an increasing number of potential security risks, which need to be systematically assessed and addressed. Expert-based manual assessment of IoT security is the predominant approach, which is usually inefficient. To address this problem, we propose an automated security assessment framework for IoT networks. Our framework first leverages machine learning and natural language processing to analyze vulnerability descriptions for predicting vulnerability metrics. The predicted metrics are then input into a two-layered graphical security model, which consists of an attack graph at the upper layer to present the network connectivity and an attack tree for each node in the network at the bottom layer to depict the vulnerability information. This security model automatically assesses the security of the IoT network by capturing potential attack paths. We evaluate the viability of our approach using a proof-of-concept smart building system model which contains a variety of real-world IoT devices and potential vulnerabilities. Our evaluation of the proposed framework demonstrates its effectiveness in automatically predicting the vulnerability metrics of new vulnerabilities with more than 90% accuracy, on average, and in identifying the most vulnerable attack paths within an IoT network. The produced assessment results can serve as a guideline for cybersecurity professionals to take further actions and mitigate risks in a timely manner.
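The upper-layer attack graph of the two-layered model can be illustrated by enumerating attack paths from an entry point to a target over a toy topology; the devices and edges below are invented, and the paper's model additionally scores each node with predicted vulnerability metrics:

```python
# Enumerate all simple attack paths through a toy attack graph by DFS:
# nodes are devices, edges are exploitable connections.

def attack_paths(graph, source, target, path=None):
    """Return every cycle-free path from source to target."""
    path = (path or []) + [source]
    if source == target:
        return [path]
    paths = []
    for nxt in graph.get(source, []):
        if nxt not in path:  # keep paths simple (no revisits)
            paths.extend(attack_paths(graph, nxt, target, path))
    return paths

net = {"internet": ["camera", "router"],
       "camera": ["router"],
       "router": ["thermostat"],
       "thermostat": []}
paths = attack_paths(net, "internet", "thermostat")
```

Ranking these paths by per-node vulnerability scores is what lets the framework single out the most vulnerable attack path.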
Ali, Wan Noor Hamiza Wan, Mohd, Masnizah, Fauzi, Fariza.  2021.  Cyberbullying Predictive Model: Implementation of Machine Learning Approach. 2021 Fifth International Conference on Information Retrieval and Knowledge Management (CAMP). :65–69.
Machine learning is implemented extensively in various applications. Machine learning algorithms teach computers to do what comes naturally to humans. The objective of this study is to compare predictive models for cyberbullying detection between a basic machine learning system and the proposed system, which adds feature selection, resampling, and hyperparameter optimization, using two classifiers: Linear Support Vector Classification and Decision Tree. A corpus from ASKfm was used to extract word n-gram features before being fed into eight different experimental setups. Evaluation of the performance metrics shows that the Decision Tree gives the best performance when tested with feature selection but without resampling or hyperparameter optimization. This shows that the proposed system is better than the basic machine learning setting.
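The overall shape of such a pipeline (word n-gram features, feature selection, then a Decision Tree tuned by search) might look like the sketch below. The six example messages are invented placeholders; the ASKfm corpus is not reproduced here, and the parameter values are illustrative, not the paper's.

```python
# Sketch of a cyberbullying-detection pipeline: word uni-/bigram features,
# chi-squared feature selection, and a Decision Tree with hyperparameter
# search. Texts and labels are toy data (1 = bullying).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier

texts = ["you are awesome", "nobody likes you loser", "great answer thanks",
         "shut up you idiot", "have a nice day", "you are so dumb"]
labels = [0, 1, 0, 1, 0, 1]

pipe = Pipeline([
    ("ngrams", CountVectorizer(ngram_range=(1, 2))),  # word n-gram features
    ("select", SelectKBest(chi2, k=5)),               # feature selection
    ("clf", DecisionTreeClassifier(random_state=0)),
])
search = GridSearchCV(pipe, {"clf__max_depth": [2, 4, None]}, cv=2)
search.fit(texts, labels)
print(search.predict(["you loser"]))
```

Resampling (e.g., SMOTE or random oversampling) would slot in as an extra pipeline step when the class distribution is skewed, which is typical for cyberbullying corpora.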
2022-02-22
Torquato, Matheus, Vieira, Marco.  2021.  VM Migration Scheduling as Moving Target Defense against Memory DoS Attacks: An Empirical Study. 2021 IEEE Symposium on Computers and Communications (ISCC). :1—6.
Memory Denial of Service (DoS) attacks are easy to launch, hard to detect, and significantly impact their targets. In a memory DoS attack, the attacker targets the memory of their own Virtual Machine (VM) and, due to hardware isolation issues, the attack affects the co-resident VMs. Theoretically, we can deploy VM migration as Moving Target Defense (MTD) against memory DoS. However, the current literature lacks empirical evidence supporting this hypothesis. Moreover, there is a need to evaluate how the VM migration timing impacts the potential MTD protection. This practical experience report presents an experiment on VM migration-based MTD against memory DoS. We evaluate the impact of memory DoS attacks in the context of two applications running in co-hosted VMs: machine learning and OLTP. The results highlight that memory DoS attacks lead to more than a 70% reduction in the applications' performance. Nevertheless, timely VM migrations can significantly mitigate the attack effects in both considered applications.
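The hypothesis being tested can be captured in a toy model: a co-resident memory DoS attack degrades the victim's throughput until migration removes co-residency. Only the >70% degradation figure comes from the abstract; the baseline throughput, timings, and the step-function shape below are invented for illustration.

```python
# Toy model of VM migration as MTD against memory DoS: the victim runs at
# full throughput until the attack starts, then at 30% of baseline (a >70%
# loss, per the abstract) until migration moves it away from the attacker.
def throughput(t, attack_start, migration_time, base=100.0, degraded=0.3):
    """Victim throughput at time t under a co-resident memory DoS attack."""
    if attack_start <= t < migration_time:
        return base * degraded  # degraded while co-resident with attacker
    return base                 # full throughput before attack / after migration

no_mtd = [throughput(t, attack_start=10, migration_time=float("inf"))
          for t in range(60)]
with_mtd = [throughput(t, attack_start=10, migration_time=25)
            for t in range(60)]

print(f"mean throughput, no migration:     {sum(no_mtd) / len(no_mtd):.1f}")
print(f"mean throughput, timely migration: {sum(with_mtd) / len(with_mtd):.1f}")
```

The gap between the two means is the quantity the migration-timing evaluation in the paper explores empirically: the later the migration, the longer the degraded interval.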
Jenkins, Chris, Vugrin, Eric, Manickam, Indu, Troutman, Nicholas, Hazelbaker, Jacob, Krakowiak, Sarah, Maxwell, Josh, Brown, Richard.  2021.  Moving Target Defense for Space Systems. 2021 IEEE Space Computing Conference (SCC). :60—71.
Space systems provide many critical functions to the military, federal agencies, and infrastructure networks. Nation-state adversaries have shown the ability to disrupt critical infrastructure through cyber-attacks targeting systems of networked, embedded computers. Moving target defenses (MTDs) have been proposed as a means for defending various networks and systems against potential cyber-attacks. MTDs differ from many cyber resilience technologies in that they do not necessarily require detection of an attack to mitigate the threat. We devised an MTD algorithm and tested its application to a real-time network. First, we demonstrated MTD usage with a real-time protocol under constraints not typically found in best-effort networks. Second, we quantified the cyber resilience benefit of MTD against an exfiltration attack by an adversary. In our experiment, employing MTD reduced adversarial knowledge by 97%. Even when the adversary can detect when the address changes, there is still a reduction in adversarial knowledge compared to static addressing schemes. Furthermore, we analyzed the core performance of the algorithm and characterized its unpredictability using nine different statistical metrics. The characterization highlighted that the algorithm has good unpredictability, with some opportunity for improvement toward greater randomness.
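The address-hopping idea underlying this kind of MTD can be sketched in a few lines: the defended node periodically draws a new address from a pool, so an address the adversary has learned quickly goes stale. The pool size, hop count, and staleness metric below are invented for illustration and are not the paper's algorithm.

```python
# Sketch of address-hopping MTD. Addresses are drawn with the secrets
# module (cryptographically strong randomness) from a toy /24-sized pool.
import secrets

POOL = range(0x0A000000, 0x0A0000FF)  # toy pool, 10.0.0.0/24-style

def hop_schedule(n_hops):
    """Return n_hops addresses drawn unpredictably from the pool."""
    pool = list(POOL)
    return [pool[secrets.randbelow(len(pool))] for _ in range(n_hops)]

schedule = hop_schedule(100)

# Crude staleness metric: fraction of hops after which an adversary who
# learned the previous address no longer holds a valid one.
stale = sum(a != b for a, b in zip(schedule, schedule[1:])) / (len(schedule) - 1)
print(f"fraction of hops that invalidate prior knowledge: {stale:.2f}")
```

A real-time deployment, as the paper stresses, would additionally have to schedule hops so that address changes never violate the protocol's timing constraints, which is what distinguishes this setting from best-effort networks.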