Kroeger, Trevor, Cheng, Wei, Guilley, Sylvain, Danger, Jean-Luc, Karimi, Naghmeh.
2021.
Making Obfuscated PUFs Secure Against Power Side-Channel Based Modeling Attacks. 2021 Design, Automation & Test in Europe Conference & Exhibition (DATE). :1000–1005.
To enhance the security of digital circuits, there is often a desire to dynamically generate, rather than statically store, the random values used for identification and authentication. Physically Unclonable Functions (PUFs) provide the means to realize this feature in an efficient and reliable way by exploiting the commonly overlooked process variations that unintentionally occur during the manufacturing of integrated circuits (ICs) due to imperfections in the fabrication process. When given a challenge, a PUF produces a unique response. However, PUFs have been found to be vulnerable to modeling attacks in which, by collecting a set of challenge-response pairs (CRPs) and training a machine learning model, the response can be predicted for unseen challenges. To combat this vulnerability, researchers have proposed techniques such as challenge obfuscation. However, as shown in this paper, this technique can be compromised by modeling the PUF's power side-channel. We first show the vulnerability of a state-of-the-art Challenge Obfuscated PUF (CO-PUF) against power analysis attacks by presenting our attack results on the targeted CO-PUF. We then propose two countermeasures, as well as their hybrid version, that when applied to CO-PUFs make them resilient against power side-channel based modeling attacks. We also provide insights into the design metrics that need to be considered when implementing these mitigations. Our simulation results show the high success of our attack in compromising the original Challenge Obfuscated PUFs (success rate > 98%) as well as the significant improvement in the resilience of the obfuscated PUFs against power side-channel based modeling attacks when equipped with our countermeasures.
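To make the attack setting concrete, the following is a minimal sketch of a power side-channel modeling attack in the spirit described above: an attacker trains a model on power-trace features to predict response bits. It is not the paper's implementation; the synthetic traces, feature count, and logistic-regression model are illustrative assumptions.

```python
# Hedged sketch: power side-channel modeling attack on a PUF-like target.
# Synthetic data only; a real attack would use measured power traces of a CO-PUF.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_crps, n_trace_samples = 5000, 64  # assumed numbers of CRPs and trace samples

# Synthetic stand-in: each CRP yields a power trace whose shape leaks the response bit.
secret_weights = rng.normal(size=n_trace_samples)
traces = rng.normal(size=(n_crps, n_trace_samples))
responses = (traces @ secret_weights + 0.5 * rng.normal(size=n_crps) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(traces, responses, test_size=0.2, random_state=0)
attack_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Response prediction accuracy: {attack_model.score(X_test, y_test):.2%}")
```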
Moskal, Stephen, Yang, Shanchieh Jay.
2021.
Translating Intrusion Alerts to Cyberattack Stages Using Pseudo-Active Transfer Learning (PATRL). 2021 IEEE Conference on Communications and Network Security (CNS). :110–118.
Intrusion alerts continue to grow in volume, variety, and complexity. Their cryptic nature requires substantial time and expertise to interpret the intended consequence of observed malicious actions. To assist security analysts in effectively diagnosing what alerts mean, this work develops a novel machine learning approach that translates alert descriptions into intuitively interpretable Action-Intent-Stages (AIS) with only 1% labeled data. We combine transfer learning, active learning, and pseudo labels to develop the Pseudo-Active Transfer Learning (PATRL) process. The PATRL process begins with a language model trained in an unsupervised manner on MITRE ATT&CK, CVE, and IDS alert descriptions. The language model feeds an LSTM classifier, which is trained with the 1% labeled data and further enhanced with active learning using pseudo labels predicted by the iteratively improved models. Our results suggest PATRL can predict correctly for 85% (top-1 label) and 99% (top-3 labels) of the remaining 99% of unlabeled data. Recognizing the need to build analysts' confidence in using the model, the system provides a Monte-Carlo Dropout Uncertainty and a Pseudo-Label Convergence Score for each predicted alert. These metrics give the analyst insight into whether to trust the top-1 or top-3 predictions directly and whether additional pseudo labels are needed. Our approach overcomes a rarely tackled research problem in which the minimal amount of labeled data does not reflect the characteristics of the truly unlabeled data. Combining the advantages of transfer learning, active learning, and pseudo labels, the PATRL process translates complex intrusion alert descriptions for analysts with confidence.
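As an illustration of the uncertainty reporting mentioned above, here is a minimal sketch of Monte-Carlo Dropout over a small LSTM alert classifier. It is not the PATRL implementation; the tiny network, vocabulary size, label count, and dummy tokenized alert are assumptions.

```python
# Hedged sketch: MC-Dropout uncertainty for an alert-stage classifier (PyTorch).
import torch
import torch.nn as nn

class AlertStageClassifier(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=32, hidden=64, n_stages=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.dropout = nn.Dropout(0.5)
        self.head = nn.Linear(hidden, n_stages)

    def forward(self, tokens):
        _, (h, _) = self.lstm(self.embed(tokens))
        return self.head(self.dropout(h[-1]))

model = AlertStageClassifier()
model.train()                                   # keep dropout active for MC sampling
tokens = torch.randint(0, 1000, (1, 20))        # one dummy tokenized alert description
with torch.no_grad():
    probs = torch.stack([model(tokens).softmax(-1) for _ in range(30)])
mean_probs = probs.mean(0)                      # averaged class probabilities
uncertainty = probs.std(0).mean().item()        # MC-Dropout uncertainty score
top3 = mean_probs.topk(3).indices.tolist()[0]
print(f"top-3 stage indices: {top3}, uncertainty: {uncertainty:.3f}")
```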
Musa, Usman Shuaibu, Chakraborty, Sudeshna, Abdullahi, Muhammad M., Maini, Tarun.
2021.
A Review on Intrusion Detection System Using Machine Learning Techniques. 2021 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS). :541–549.
Computer networks are exposed to cyber attacks due to the widespread use of the Internet; as a result, several intrusion detection systems (IDSs) have been proposed by researchers. Detecting intrusions is among the key research issues in securing networks, as it helps to recognize unauthorized usage and attacks and thereby ensure the network's security. Various approaches have been proposed to determine the most effective features and hence enhance the efficiency of intrusion detection systems; these methods include machine learning (ML), Bayesian-based algorithms, nature-inspired meta-heuristic techniques, swarm intelligence algorithms, and Markov neural networks. Over the years, the various works carried out have been evaluated on different datasets. This paper presents a thorough review of research articles that employed single, hybrid, and ensemble classification algorithms. The result metrics, shortcomings, and datasets used by the studied articles in the development of IDSs are compared. Future directions for potential research are also given.
Duan, Xuanyu, Ge, Mengmeng, Le, Triet Huynh Minh, Ullah, Faheem, Gao, Shang, Lu, Xuequan, Babar, M. Ali.
2021.
Automated Security Assessment for the Internet of Things. 2021 IEEE 26th Pacific Rim International Symposium on Dependable Computing (PRDC). :47–56.
Internet of Things (IoT) based applications face an increasing number of potential security risks, which need to be systematically assessed and addressed. Expert-based manual assessment of IoT security is the predominant approach, but it is usually inefficient. To address this problem, we propose an automated security assessment framework for IoT networks. Our framework first leverages machine learning and natural language processing to analyze vulnerability descriptions and predict vulnerability metrics. The predicted metrics are then input into a two-layered graphical security model, which consists of an attack graph at the upper layer to represent the network connectivity and an attack tree for each node in the network at the bottom layer to depict the vulnerability information. This security model automatically assesses the security of the IoT network by capturing potential attack paths. We evaluate the viability of our approach using a proof-of-concept smart building system model which contains a variety of real-world IoT devices and potential vulnerabilities. Our evaluation of the proposed framework demonstrates its effectiveness in automatically predicting the vulnerability metrics of new vulnerabilities with more than 90% accuracy, on average, and identifying the most vulnerable attack paths within an IoT network. The produced assessment results can serve as a guideline for cybersecurity professionals to take further actions and mitigate risks in a timely manner.
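The NLP step described above (predicting vulnerability metrics from free-text descriptions) can be pictured with the following minimal sketch. It is not the paper's pipeline; the toy descriptions, the impact labels, and the TF-IDF plus linear-SVM combination are assumptions.

```python
# Hedged sketch: predicting a vulnerability metric from its description.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

descriptions = [
    "Buffer overflow allows remote attackers to execute arbitrary code",
    "Improper input validation leads to denial of service via crafted packet",
    "Hard-coded credentials allow unauthorized access to the device",
    "Cross-site scripting in the web interface discloses session tokens",
]
impact = ["HIGH", "MEDIUM", "HIGH", "MEDIUM"]   # hypothetical metric labels

metric_model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
metric_model.fit(descriptions, impact)
print(metric_model.predict(["Stack overflow enables remote code execution on the camera firmware"]))
```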
Alabbasi, Abdulrahman, Ganjalizadeh, Milad, Vandikas, Konstantinos, Petrova, Marina.
2021.
On Cascaded Federated Learning for Multi-Tier Predictive Models. 2021 IEEE International Conference on Communications Workshops (ICC Workshops). :1–7.
The performance prediction of user equipment (UE) metrics has many applications in the 5G era and beyond. For instance, throughput prediction can improve carrier selection, adaptive video streaming's quality of experience (QoE), and traffic latency. Many studies suggest distributed learning algorithms (e.g., federated learning (FL)) for this purpose. However, in a multi-tier design, features are measured in different tiers, e.g., the UE tier and the gNodeB (gNB) tier. On one hand, neglecting the measurements in one tier results in inaccurate predictions. On the other hand, transmitting the data from one tier to another improves the prediction performance at the expense of increased network overhead and privacy risks. In this paper, we propose cascaded FL to enhance UE throughput prediction with minimal network footprint and privacy ramifications (if any). The idea is to introduce feedback into conventional FL in multi-tier architectures. Although we use cascaded FL for UE prediction tasks, the idea is rather general and can be used for many prediction problems in multi-tier architectures, such as cellular networks. We evaluate the performance of cascaded FL through detailed, 3GPP-compliant simulations of London's city center. Our simulations show that the proposed cascaded FL can achieve up to 54% improvement over conventional FL in the normalized gain, at the cost of 1.8 MB (without quantization) and no cost with quantization.
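For orientation, the sketch below shows the plain federated-averaging step that cascaded FL builds on; the per-tier feedback that distinguishes cascaded FL is omitted. The weight vectors and client data sizes are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch: FedAvg-style aggregation of per-client throughput-predictor weights.
import numpy as np

def federated_average(client_weights, client_sizes):
    """Aggregate client model weights, weighted by local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

# Three UE-tier clients with locally trained model weights (toy vectors).
clients = [np.array([0.8, 1.2, -0.3]), np.array([1.0, 0.9, -0.1]), np.array([0.7, 1.1, -0.4])]
global_model = federated_average(clients, client_sizes=[120, 80, 200])
print("aggregated model at the gNB tier:", global_model)
```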
Muhati, Eric, Rawat, Danda B.
2021.
Adversarial Machine Learning for Inferring Augmented Cyber Agility Prediction. IEEE INFOCOM 2021 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :1–6.
Security analysts conduct continuous evaluations of cyber-defense tools to keep pace with advanced and persistent threats. Cyber agility has become a critical proactive security resource that makes it possible to measure defense adjustments and reactions to rising threats. Subsequently, machine learning has been applied to support cyber agility prediction as an essential effort to anticipate future security performance. Nevertheless, apt and treacherous actors motivated by economic incentives continue to prevail in circumventing machine learning-based protection tools. Adversarial learning, widely applied in computer security and especially intrusion detection, has emerged as a new area of concern for the recently recognized critical task of cyber agility prediction. The rationale is that if a sophisticated malicious actor obtains the cyber agility parameters, correct prediction cannot be guaranteed unless white-box attack failures can be demonstrated. The challenge lies in recognizing that unconstrained adversaries hold vast potential capabilities; in practice, they could have perfect knowledge, i.e., a full understanding of the defense tool in use. We address this challenge by proposing an adversarial machine learning approach that achieves accurate cyber agility forecasts by mapping nefarious influence onto static defense tool metrics. Considering that an adversary would aim to induce dangerous overconfidence in a defense tool, we demonstrate resilient cyber agility prediction through verified attack signatures in dynamic learning windows. We then compare cyber agility prediction under negative influence with and without our proposed dynamic learning windows. Our numerical results show that the model's performance degrades without adversarial machine learning. Such a feigned measure of performance could lead to incorrect software security patching.
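The dynamic learning windows mentioned above can be pictured as retraining a forecaster on sliding windows that keep only verified samples. The sketch below is an assumption-laden toy, not the paper's method: the window size, the verification flag, and the synthetic agility series are all illustrative.

```python
# Hedged sketch: sliding-window retraining with verified samples only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
agility = np.cumsum(rng.normal(0.1, 1.0, size=200))   # synthetic agility metric over time
verified = rng.random(200) > 0.05                      # flag: sample backed by a verified attack signature

window = 50
for start in range(0, len(agility) - window, window):
    idx = np.arange(start, start + window)
    idx = idx[verified[idx]]                           # drop unverified (possibly adversarial) samples
    model = LinearRegression().fit(idx.reshape(-1, 1), agility[idx])
    forecast = model.predict([[start + window]])[0]
    print(f"window {start}-{start + window}: next-step forecast {forecast:.2f}")
```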
Ali, Wan Noor Hamiza Wan, Mohd, Masnizah, Fauzi, Fariza.
2021.
Cyberbullying Predictive Model: Implementation of Machine Learning Approach. 2021 Fifth International Conference on Information Retrieval and Knowledge Management (CAMP). :65–69.
Machine learning is implemented extensively in various applications; machine learning algorithms teach computers to do what comes naturally to humans. The objective of this study is to compare predictive models for cyberbullying detection between a basic machine learning system and the proposed system, which involves a feature selection technique, resampling, and hyperparameter optimization, using two classifiers: linear Support Vector Classification and Decision Tree. A corpus from ASKfm was used to extract word n-gram features before being applied in eight different experimental setups. Evaluation of the performance metrics shows that the Decision Tree gives the best performance when tested using feature selection without resampling or hyperparameter optimization. This shows that the proposed system is better than the basic machine learning setting.
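To ground the comparison described above, here is a minimal sketch of the kind of pipeline involved: word n-gram features, a feature selection step, and the two classifiers. The toy posts, labels, and parameter values are assumptions, not the paper's configuration.

```python
# Hedged sketch: n-gram features + feature selection + the two compared classifiers.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

posts = ["you are so stupid", "great game last night",
         "nobody likes you loser", "see you at practice"]
labels = [1, 0, 1, 0]                              # 1 = bullying, 0 = not bullying (toy labels)

for name, clf in [("LinearSVC", LinearSVC()), ("DecisionTree", DecisionTreeClassifier())]:
    pipeline = make_pipeline(
        CountVectorizer(ngram_range=(1, 2)),       # word n-gram features
        SelectKBest(chi2, k=10),                   # feature selection step
        clf,
    )
    pipeline.fit(posts, labels)
    print(name, pipeline.predict(["you are a loser"]))
```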
Ramirez-Gonzalez, M., Segundo Sevilla, F. R., Korba, P..
2021.
Convolutional Neural Network Based Approach for Static Security Assessment of Power Systems. 2021 World Automation Congress (WAC). :106–110.
Steady-state response of the grid under a predefined set of credible contingencies is an important component of power system security assessment. With the growing complexity of electrical networks, fast and reliable methods and tools are required to effectively assist transmission grid operators in making decisions concerning system security procurement. In this regard, a Convolutional Neural Network (CNN) based approach to developing prediction models for static security assessment under N-1 contingency is investigated in this paper. The CNN model is trained and applied to classify the security status of a sample system according to given node voltage magnitudes and active and reactive power injections at network buses. Considering a set of performance metrics, the superior performance of the CNN alternative is demonstrated by comparing the obtained results with those of a support vector machine classifier.
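As a rough illustration of the classifier described above, the sketch below feeds per-bus measurements (voltage magnitude, active and reactive injections) through a small 1-D CNN to label an operating point as secure or insecure. The layer sizes, bus count, and synthetic input are assumptions, not the paper's architecture.

```python
# Hedged sketch: CNN over per-bus grid measurements for secure/insecure classification.
import torch
import torch.nn as nn

n_buses = 30                                     # assumed system size
model = nn.Sequential(
    nn.Conv1d(in_channels=3, out_channels=16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(16, 2),                            # two classes: secure vs. insecure
)

# One synthetic operating point: channels = [V magnitude, P injection, Q injection].
operating_point = torch.randn(1, 3, n_buses)
logits = model(operating_point)
print("predicted class:", logits.argmax(dim=1).item())
```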
Zhou, Andy, Sultana, Kazi Zakia, Samanthula, Bharath K.
2021.
Investigating the Changes in Software Metrics after Vulnerability Is Fixed. 2021 IEEE International Conference on Big Data (Big Data). :5658–5663.
Preventing software vulnerabilities while writing code is one of the most effective ways of avoiding cyber attacks on any developed system. Although developers follow standard guiding principles for ensuring secure code, the code can still have security bottlenecks and be compromised by an attacker. Therefore, assessing software security while developing code can help developers write vulnerability-free code. Researchers have already focused on metrics-based and text-mining-based software vulnerability prediction models. The metrics-based models showed higher precision in predicting vulnerabilities, although the recall rate is low. In addition, current research has not investigated the impact of individual software metrics on the occurrence of vulnerabilities. The main objective of this paper is to track the changes in every software metric after the developer fixes a particular vulnerability. The results of our research will potentially motivate further research on building more accurate vulnerability prediction models based on the appropriate software metrics. In particular, we have compared a total of 250 files from Apache Tomcat and Apache CXF. These files were extracted from the Apache database and were chosen because Apache released them as vulnerable in their publicly available security advisories. Using a static analysis tool, metrics of the targeted vulnerable files and the corresponding fixed files (files where the vulnerable code is removed by the developers) were extracted and compared. We show that eight of the 40 metrics have an average increase of 2% from vulnerable to fixed files: CountDeclClass, CountDeclClassMethod, CountDeclClassVariable, CountDeclInstanceVariable, CountDeclMethodDefault, CountLineCode, MaxCyclomaticStrict, and MaxNesting. This study will help developers assess software security by utilizing software metrics in secure coding practices.
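The core comparison above (metric values before and after a fix) amounts to a simple per-metric percentage change. The sketch below uses hard-coded toy values in place of real output from a static analysis tool; the specific numbers are assumptions.

```python
# Hedged sketch: per-metric change between a vulnerable file and its fixed version.
vulnerable = {"CountLineCode": 420, "MaxNesting": 4, "CountDeclClassMethod": 12}
fixed      = {"CountLineCode": 431, "MaxNesting": 4, "CountDeclClassMethod": 13}

for metric, before in vulnerable.items():
    after = fixed[metric]
    change = (after - before) / before * 100 if before else 0.0
    print(f"{metric}: {before} -> {after} ({change:+.1f}%)")
```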