Biblio

Filters: Keyword is predictive security metrics
2022-02-24
Kroeger, Trevor, Cheng, Wei, Guilley, Sylvain, Danger, Jean-Luc, Karimi, Naghmeh.  2021.  Making Obfuscated PUFs Secure Against Power Side-Channel Based Modeling Attacks. 2021 Design, Automation & Test in Europe Conference Exhibition (DATE). :1000–1005.
To enhance the security of digital circuits, there is often a desire to dynamically generate, rather than statically store, random values used for identification and authentication purposes. Physically Unclonable Functions (PUFs) provide the means to realize this feature in an efficient and reliable way by utilizing commonly overlooked process variations that unintentionally occur during the manufacturing of integrated circuits (ICs) due to imperfections in the fabrication process. When given a challenge, PUFs produce a unique response. However, PUFs have been found to be vulnerable to modeling attacks in which, by using a set of collected challenge-response pairs (CRPs) to train a machine learning model, the response can be predicted for unseen challenges. To combat this vulnerability, researchers have proposed techniques such as Challenge Obfuscation. However, as shown in this paper, this technique can be compromised via modeling the PUF's power side-channel. We first show the vulnerability of a state-of-the-art Challenge Obfuscated PUF (CO-PUF) against power analysis attacks by presenting our attack results on the targeted CO-PUF. Then we propose two countermeasures, as well as their hybrid version, that when applied to the CO-PUFs make them resilient against power side-channel based modeling attacks. We also provide some insights on the design metrics that should be considered when implementing these mitigations. Our simulation results show the high success of our attack in compromising the original Challenge Obfuscated PUFs (success rate > 98%) as well as the significant improvement in the resilience of the obfuscated PUFs against power side-channel based modeling when equipped with our countermeasures.
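For context, the CRP-based modeling attack the abstract refers to is typically mounted by fitting a linear model to transformed challenges. Below is a minimal, hypothetical sketch against a simulated arbiter-style PUF (the parity-feature transform and simulated stage delays are illustrative assumptions, not the CO-PUF design or the power-trace attack from the paper):

```python
# Hypothetical sketch of a CRP modeling attack on an arbiter-style PUF.
# The PUF is simulated with random stage delays; a real attack would use
# measured challenge-response pairs (and, per the paper, power traces).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_stages, n_crps = 64, 20000

def parity_features(challenges):
    # Standard arbiter-PUF transform: phi_i = prod_{j>=i} (1 - 2*c_j)
    signs = 1 - 2 * challenges                    # {0,1} -> {+1,-1}
    return np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]

challenges = rng.integers(0, 2, size=(n_crps, n_stages))
weights = rng.normal(size=n_stages)               # simulated stage delays
responses = (parity_features(challenges) @ weights > 0).astype(int)

model = LogisticRegression(max_iter=1000)
model.fit(parity_features(challenges[:15000]), responses[:15000])
acc = model.score(parity_features(challenges[15000:]), responses[15000:])
print(f"prediction accuracy on unseen challenges: {acc:.3f}")
```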
Moskal, Stephen, Yang, Shanchieh Jay.  2021.  Translating Intrusion Alerts to Cyberattack Stages Using Pseudo-Active Transfer Learning (PATRL). 2021 IEEE Conference on Communications and Network Security (CNS). :110–118.
Intrusion alerts continue to grow in volume, variety, and complexity. Their cryptic nature requires substantial time and expertise to interpret the intended consequence of observed malicious actions. To assist security analysts in effectively diagnosing what alerts mean, this work develops a novel machine learning approach that translates alert descriptions to intuitively interpretable Action-Intent-Stages (AIS) with only 1% labeled data. We combine transfer learning, active learning, and pseudo labels and develop the Pseudo-Active Transfer Learning (PATRL) process. The PATRL process begins with an unsupervised-trained language model using MITRE ATT&CK, CVE, and IDS alert descriptions. The language model feeds to an LSTM classifier to train with 1% labeled data and is further enhanced with active learning using pseudo labels predicted by the iteratively improved models. Our results suggest PATRL can predict correctly for 85% (top-1 label) and 99% (top-3 labels) of the remaining 99% unknown data. Recognizing the need to build confidence for the analysts to use the model, the system provides Monte-Carlo Dropout Uncertainty and Pseudo-Label Convergence Score for each of the predicted alerts. These metrics give the analyst insights to determine whether to directly trust the top-1 or top-3 predictions and whether additional pseudo labels are needed. Our approach overcomes a rarely tackled research problem where minimal amounts of labeled data do not reflect the truly unlabeled data's characteristics. Combining the advantages of transfer learning, active learning, and pseudo labels, the PATRL process translates the complex intrusion alert description for the analysts with confidence.
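As background, here is a minimal sketch of the generic self-training loop that pseudo-labeling schemes such as PATRL build on (the classifier, confidence threshold, and round count are illustrative assumptions; the paper's pipeline uses a pre-trained language model feeding an LSTM with Monte-Carlo Dropout):

```python
# Generic self-training / pseudo-labeling loop (illustrative, not PATRL itself).
# Assumes integer class labels 0..k-1 so argmax indices match the labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label_loop(X_lab, y_lab, X_unlab, threshold=0.95, rounds=5):
    X_train, y_train = X_lab.copy(), y_lab.copy()
    for _ in range(rounds):
        clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        probs = clf.predict_proba(X_unlab)
        keep = probs.max(axis=1) >= threshold     # only trust confident predictions
        if not keep.any():
            break
        X_train = np.vstack([X_train, X_unlab[keep]])
        y_train = np.concatenate([y_train, probs[keep].argmax(axis=1)])
        X_unlab = X_unlab[~keep]                  # remove newly pseudo-labeled points
    return clf
```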
Musa, Usman Shuaibu, Chakraborty, Sudeshna, Abdullahi, Muhammad M., Maini, Tarun.  2021.  A Review on Intrusion Detection System Using Machine Learning Techniques. 2021 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS). :541–549.
Computer networks are exposed to cyber attacks due to the common usage of the internet; as a result, several intrusion detection systems (IDSs) have been proposed by researchers. Detecting intrusions is among the key research issues in securing networks: it helps to recognize unauthorized usage and attacks as a measure to ensure the network's security. Various approaches have been proposed to determine the most effective features and hence enhance the efficiency of intrusion detection systems; the methods include machine learning (ML) based approaches, Bayesian algorithms, nature-inspired meta-heuristic techniques, swarm intelligence algorithms, and Markov neural networks. Over the years, these works have been evaluated on different datasets. This paper presents a thorough review of various research articles that employed single, hybrid, and ensemble classification algorithms. The result metrics, shortcomings, and datasets used by the studied articles in the development of IDSs are compared. A future direction for potential research is also given.
Duan, Xuanyu, Ge, Mengmeng, Le, Triet Huynh Minh, Ullah, Faheem, Gao, Shang, Lu, Xuequan, Babar, M. Ali.  2021.  Automated Security Assessment for the Internet of Things. 2021 IEEE 26th Pacific Rim International Symposium on Dependable Computing (PRDC). :47–56.
Internet of Things (IoT) based applications face an increasing number of potential security risks, which need to be systematically assessed and addressed. Expert-based manual assessment of IoT security is a predominant approach, which is usually inefficient. To address this problem, we propose an automated security assessment framework for IoT networks. Our framework first leverages machine learning and natural language processing to analyze vulnerability descriptions for predicting vulnerability metrics. The predicted metrics are then input into a two-layered graphical security model, which consists of an attack graph at the upper layer to present the network connectivity and an attack tree for each node in the network at the bottom layer to depict the vulnerability information. This security model automatically assesses the security of the IoT network by capturing potential attack paths. We evaluate the viability of our approach using a proof-of-concept smart building system model which contains a variety of real-world IoT devices and potential vulnerabilities. Our evaluation of the proposed framework demonstrates its effectiveness in terms of automatically predicting the vulnerability metrics of new vulnerabilities with more than 90% accuracy, on average, and identifying the most vulnerable attack paths within an IoT network. The produced assessment results can serve as a guideline for cybersecurity professionals to take further actions and mitigate risks in a timely manner.
Alabbasi, Abdulrahman, Ganjalizadeh, Milad, Vandikas, Konstantinos, Petrova, Marina.  2021.  On Cascaded Federated Learning for Multi-Tier Predictive Models. 2021 IEEE International Conference on Communications Workshops (ICC Workshops). :1–7.
The performance prediction of user equipment (UE) metrics has many applications in the 5G era and beyond. For instance, throughput prediction can improve carrier selection, adaptive video streaming's quality of experience (QoE), and traffic latency. Many studies suggest distributed learning algorithms (e.g., federated learning (FL)) for this purpose. However, in a multi-tier design, features are measured in different tiers, e.g., the UE tier and the gNodeB (gNB) tier. On one hand, neglecting the measurements in one tier results in inaccurate predictions. On the other hand, transmitting the data from one tier to another improves the prediction performance at the expense of increased network overhead and privacy risks. In this paper, we propose cascaded FL to enhance UE throughput prediction with minimum network footprint and privacy ramifications (if any). The idea is to introduce feedback to conventional FL in multi-tier architectures. Although we use cascaded FL for UE prediction tasks, the idea is rather general and can be used for many prediction problems in multi-tier architectures, such as cellular networks. We evaluate the performance of cascaded FL through detailed, 3GPP-compliant simulations of London's city center. Our simulations show that the proposed cascaded FL can achieve up to 54% improvement over conventional FL in the normalized gain, at the cost of 1.8 MB (without quantization) and no cost with quantization.
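For reference, the conventional FL baseline that cascaded FL adds feedback to aggregates client updates by federated averaging; a minimal sketch of one aggregation round follows (the sample-count weighting shown is the standard FedAvg rule, assumed here, not a detail from the paper):

```python
# One round of federated averaging (FedAvg) - the conventional FL baseline
# that the proposed cascaded scheme extends (sketch, not the paper's model).
import numpy as np

def fedavg_round(client_weights, client_sizes):
    """Average client model weights, weighted by local dataset size."""
    total = sum(client_sizes)
    stacked = np.stack(client_weights)            # shape: (n_clients, n_params)
    coeffs = np.array(client_sizes) / total
    return coeffs @ stacked                       # weighted average of parameters

# e.g., three UEs reporting locally trained parameter vectors:
global_w = fedavg_round([np.ones(4), 2 * np.ones(4), 3 * np.ones(4)],
                        client_sizes=[100, 50, 50])
print(global_w)   # -> [1.75 1.75 1.75 1.75]
```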
Muhati, Eric, Rawat, Danda B..  2021.  Adversarial Machine Learning for Inferring Augmented Cyber Agility Prediction. IEEE INFOCOM 2021 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :1–6.
Security analysts conduct continuous evaluations of cyber-defense tools to keep pace with advanced and persistent threats. Cyber agility has become a critical proactive security resource that makes it possible to measure defense adjustments and reactions to rising threats. Subsequently, machine learning has been applied to support cyber agility prediction as an essential effort to anticipate future security performance. Nevertheless, apt and treacherous actors motivated by economic incentives continue to prevail in circumventing machine learning-based protection tools. Adversarial learning, widely applied to computer security, especially intrusion detection, has emerged as a new area of concern for the recently recognized critical cyber agility prediction. The rationale is that, if a sophisticated malicious actor obtains the cyber agility parameters, correct prediction cannot be guaranteed unless white-box attack failures can be demonstrated. The challenge lies in recognizing that unconstrained adversaries hold vast potential capabilities. In practice, they could have perfect knowledge, i.e., a full understanding of the defense tool in use. We address this challenge by proposing an adversarial machine learning approach that achieves accurate cyber agility forecasts by mapping nefarious influence on static defense tool metrics. Considering that an adversary would aim at instilling perilous confidence in a defense tool, we demonstrate resilient cyber agility prediction through verified attack signatures in dynamic learning windows. After that, we compare cyber agility prediction under negative influence with and without our proposed dynamic learning windows. Our numerical results show the model's execution degrades without adversarial machine learning. Such a feigned measure of performance could lead to incorrect software security patching.
Ali, Wan Noor Hamiza Wan, Mohd, Masnizah, Fauzi, Fariza.  2021.  Cyberbullying Predictive Model: Implementation of Machine Learning Approach. 2021 Fifth International Conference on Information Retrieval and Knowledge Management (CAMP). :65–69.
Machine learning is implemented extensively in various applications. Machine learning algorithms teach computers to do what comes naturally to humans. The objective of this study is to compare predictive models for cyberbullying detection between a basic machine learning system and the proposed system, which adds feature selection, resampling, and hyperparameter optimization, using two classifiers: linear Support Vector Classification and Decision Tree. A corpus from ASKfm was used to extract word n-gram features before being implemented in eight different experimental setups. Evaluation of the performance metrics shows that the Decision Tree gives the best performance when tested using feature selection without resampling and hyperparameter optimization. This shows that the proposed system is better than the basic machine learning setting.
Ramirez-Gonzalez, M., Segundo Sevilla, F. R., Korba, P..  2021.  Convolutional Neural Network Based Approach for Static Security Assessment of Power Systems. 2021 World Automation Congress (WAC). :106–110.
Steady-state response of the grid under a predefined set of credible contingencies is an important component of power system security assessment. With the growing complexity of electrical networks, fast and reliable methods and tools are required to effectively assist transmission grid operators in making decisions concerning system security procurement. In this regard, a Convolutional Neural Network (CNN) based approach to develop prediction models for static security assessment under N-1 contingency is investigated in this paper. The CNN model is trained and applied to classify the security status of a sample system according to given node voltage magnitudes and active and reactive power injections at network buses. Considering a set of performance metrics, the superior performance of the CNN alternative is demonstrated by comparing the obtained results with a support vector machine classifier algorithm.
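A minimal sketch of the kind of classifier described, treating the per-bus quantities (voltage magnitude, active and reactive injections) as input channels to a 1-D convolution over buses (the PyTorch framing and layer sizes are illustrative assumptions, not the paper's exact architecture):

```python
# Sketch of a 1-D CNN security-status classifier over per-bus measurements
# (architecture and sizes are illustrative, not the paper's exact model).
import torch
import torch.nn as nn

class SecurityCNN(nn.Module):
    def __init__(self, n_buses=30):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=3, padding=1),  # channels: |V|, P, Q
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, 2)               # secure / insecure

    def forward(self, x):                                # x: (batch, 3, n_buses)
        return self.classifier(self.features(x).squeeze(-1))

model = SecurityCNN()
logits = model(torch.randn(8, 3, 30))                    # 8 samples, 30 buses
print(logits.shape)                                      # torch.Size([8, 2])
```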
Zhou, Andy, Sultana, Kazi Zakia, Samanthula, Bharath K..  2021.  Investigating the Changes in Software Metrics after Vulnerability Is Fixed. 2021 IEEE International Conference on Big Data (Big Data). :5658–5663.
Preventing software vulnerabilities while writing code is one of the most effective ways for avoiding cyber attacks on any developed system. Although developers follow some standard guiding principles for ensuring secure code, the code can still have security bottlenecks and be compromised by an attacker. Therefore, assessing software security while developing code can help developers in writing vulnerability-free code. Researchers have already focused on metrics-based and text mining based software vulnerability prediction models. The metrics-based models showed higher precision in predicting vulnerabilities, although the recall rate is low. In addition, current research has not investigated the impact of individual software metrics on the occurrence of vulnerabilities. The main objective of this paper is to track the changes in every software metric after the developer fixes a particular vulnerability. The results of our research will potentially motivate further research on building more accurate vulnerability prediction models based on the appropriate software metrics. In particular, we have compared a total of 250 files from Apache Tomcat and Apache CXF. These files were extracted from the Apache database and were chosen because Apache released these files as vulnerable in their publicly available security advisories. Using a static analysis tool, metrics of the targeted vulnerable files and relevant fixed files (files where vulnerable code is removed by the developers) were extracted and compared. We show that eight of the 40 metrics have an average increase of 2% from vulnerable to fixed files. These metrics include CountDeclClass, CountDeclClassMethod, CountDeclClassVariable, CountDeclInstanceVariable, CountDeclMethodDefault, CountLineCode, MaxCyclomaticStrict, and MaxNesting. This study will help developers to assess software security by utilizing software metrics in secure coding practices.
2021-10-12
Radhakrishnan, C., Karthick, K., Asokan, R..  2020.  Ensemble Learning Based Network Anomaly Detection Using Clustered Generalization of the Features. 2020 2nd International Conference on Advances in Computing, Communication Control and Networking (ICACCCN). :157–162.
Due to the extraordinary volume of business information, sophisticated cyber-attacks targeting enterprise networks have become more common, with intruders trying to penetrate deeper into, and extract more from, compromised network machines. A vital security requirement is that field experts and network administrators share a common terminology to describe intrusion attempts and to rapidly assist each other in responding to all kinds of threats. Given the enormous volume of network traffic, traditional Machine Learning (ML) algorithms will provide ineffective predictions of network anomalies. A hybridized multi-model system can therefore improve the accuracy of detecting intrusions in networks. Accordingly, this article presents a novel approach, the Clustered Generalization oriented Ensemble Learning Model (CGELM), for predicting network anomalies. The performance metrics of the proposed approach are Detection Rate (DR) and False Predictive Rate (FPR) on two heterogeneous data sets, namely NSL-KDD and UGR'16. The proposed method achieves a DR of 98.93% and an FPR of 0.14%, outperforming Decision Stump AdaBoost and Stacking Ensemble methods.
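For reference, the two reported metrics are standard confusion-matrix quantities, where TP, FN, FP, and TN denote true positives, false negatives, false positives, and true negatives:

```latex
\mathrm{DR} = \frac{TP}{TP + FN}, \qquad \mathrm{FPR} = \frac{FP}{FP + TN}
```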
Zhao, Haojun, Lin, Yun, Gao, Song, Yu, Shui.  2020.  Evaluating and Improving Adversarial Attacks on DNN-Based Modulation Recognition. GLOBECOM 2020 - 2020 IEEE Global Communications Conference. :1–5.
The discovery of adversarial examples poses a serious risk to deep neural networks (DNNs). By adding a subtle perturbation that is imperceptible to the human eye, a well-behaved DNN model can be easily fooled into completely changing the prediction categories of the input samples. However, research on adversarial attacks in the field of modulation recognition has mainly focused on increasing the prediction error of the classifier, while ignoring the perceptual invisibility of the attack. Aiming at the task of DNN-based modulation recognition, this study designs the Fitting Difference as a metric to measure the perturbed waveforms and proposes a new method, the Nesterov Adam Iterative Method, to generate adversarial examples. We show that the proposed algorithm not only performs excellent white-box attacks but can also initiate attacks on a black-box model. Moreover, our method improves the waveform perceptual invisibility of attacks to a certain degree, thereby reducing the risk of an attack being detected.
Zhong, Zhenyu, Hu, Zhisheng, Chen, Xiaowei.  2020.  Quantifying DNN Model Robustness to the Real-World Threats. 2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN). :150–157.
DNN models have suffered from adversarial example attacks, which lead to inconsistent prediction results. As opposed to the gradient-based attack, which assumes white-box access to the model by the attacker, we focus on more realistic input perturbations from the real world and their actual impact on the model robustness without any presence of the attackers. In this work, we promote a standardized framework to quantify the robustness against real-world threats. It is composed of a set of safety properties associated with common violations, a group of metrics to measure the minimal perturbation that causes the violation, and various criteria that reflect different aspects of the model robustness. By revealing comparison results through this framework among 13 pre-trained ImageNet classifiers, three state-of-the-art object detectors, and three cloud-based content moderators, we report the status quo of real-world model robustness. Beyond that, we provide robustness benchmarking datasets for the community.
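One common way to measure the minimal perturbation that causes a violation is a bisection over the perturbation magnitude; a generic sketch of that measurement follows (the noise model, tolerance, and search interval are illustrative assumptions, not the framework's exact procedure):

```python
# Bisection search for the minimal perturbation magnitude that flips a
# model's prediction (generic sketch of one robustness metric of this kind).
def minimal_flip_magnitude(predict, x, direction, hi=1.0, tol=1e-3):
    """Smallest eps in (0, hi] with predict(x + eps*direction) != predict(x).

    `predict` maps an input array to a label; `direction` is a fixed,
    unit-norm perturbation (e.g., Gaussian noise, a brightness shift).
    Returns None if even the largest tested perturbation does not flip.
    """
    base = predict(x)
    if predict(x + hi * direction) == base:
        return None
    lo = 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if predict(x + mid * direction) == base:
            lo = mid                      # still classified as original label
        else:
            hi = mid                      # flipped: shrink upper bound
    return hi
```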
Deng, Perry, Linsky, Cooper, Wright, Matthew.  2020.  Weaponizing Unicodes with Deep Learning - Identifying Homoglyphs with Weakly Labeled Data. 2020 IEEE International Conference on Intelligence and Security Informatics (ISI). :1–6.
Visually similar characters, or homoglyphs, can be used to perform social engineering attacks or to evade spam and plagiarism detectors. It is thus important to understand the capabilities of an attacker to identify homoglyphs - particularly ones that have not been previously spotted - and leverage them in attacks. We investigate a deep-learning model using embedding learning, transfer learning, and augmentation to determine the visual similarity of characters and thereby identify potential homoglyphs. Our approach uniquely takes advantage of weak labels that arise from the fact that most characters are not homoglyphs. Our model drastically outperforms the Normalized Compression Distance approach on pairwise homoglyph identification, for which we achieve an average precision of 0.97. We also present the first attempt at clustering homoglyphs into sets of equivalence classes, which is more efficient than pairwise information for security practitioners to quickly look up homoglyphs or to normalize confusable string encodings. To measure clustering performance, we propose a metric (mBIOU) building on the classic Intersection-Over-Union (IOU) metric. Our clustering method achieves 0.592 mBIOU, compared to 0.430 for the naive baseline. We also use our model to predict over 8,000 previously unknown homoglyphs, and find good early indications that many of these may be true positives. Source code and the list of predicted homoglyphs are uploaded to GitHub: https://github.com/PerryXDeng/weaponizing_unicode.
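A plausible reading of an mBIOU-style score, sketched under the assumption that each predicted cluster is matched to its best-overlapping ground-truth class and the IOUs averaged (the paper's exact matching rule may differ):

```python
# Sketch of a mean best-match Intersection-Over-Union (mBIOU-style) score for
# comparing predicted homoglyph clusters against ground-truth equivalence classes.
# The best-match-then-average rule is an assumption about the paper's definition.
def iou(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def mbiou(predicted_clusters, true_clusters):
    return sum(max(iou(p, t) for t in true_clusters)
               for p in predicted_clusters) / len(predicted_clusters)

pred = [{"O", "0", "Ο"}, {"l", "1"}]
true = [{"O", "0", "Ο", "Օ"}, {"l", "1", "I"}]
print(mbiou(pred, true))   # 0.708...: (3/4 + 2/3) / 2
```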
Chen, Jianbo, Jordan, Michael I., Wainwright, Martin J..  2020.  HopSkipJumpAttack: A Query-Efficient Decision-Based Attack. 2020 IEEE Symposium on Security and Privacy (SP). :1277–1294.
The goal of a decision-based adversarial attack on a trained model is to generate adversarial examples based solely on observing output labels returned by the targeted model. We develop HopSkipJumpAttack, a family of algorithms based on a novel estimate of the gradient direction using binary information at the decision boundary. The proposed family includes both untargeted and targeted attacks optimized for $\ell_2$ and $\ell_\infty$ similarity metrics respectively. Theoretical analysis is provided for the proposed algorithms and the gradient direction estimate. Experiments show HopSkipJumpAttack requires significantly fewer model queries than several state-of-the-art decision-based adversarial attacks. It also achieves competitive performance in attacking several widely-used defense mechanisms.
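At its core, the method estimates the gradient direction at a boundary point $x_t$ from binary decisions alone. In its basic form (the paper adds a baseline correction for variance reduction), the Monte-Carlo estimate is:

```latex
\widehat{\nabla S}(x_t) = \frac{1}{B}\sum_{b=1}^{B} \phi\!\left(x_t + \delta u_b\right) u_b,
\qquad u_b \sim \mathrm{Unif}\left(\mathbb{S}^{d-1}\right)
```

where $\phi(\cdot) \in \{-1, +1\}$ indicates whether a probe point is adversarial and $\delta$ is a small probe radius.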
Sultana, Kazi Zakia, Codabux, Zadia, Williams, Byron.  2020.  Examining the Relationship of Code and Architectural Smells with Software Vulnerabilities. 2020 27th Asia-Pacific Software Engineering Conference (APSEC). :31–40.
Context: Security is vital to software developed for commercial or personal use. Although more organizations are realizing the importance of applying secure coding practices, in many of them, security concerns are not known or addressed until a security failure occurs. The root cause of security failures is vulnerable code. While metrics have been used to predict software vulnerabilities, we explore the relationship of code and architectural smells with security weaknesses. As smells are surface indicators of a deeper problem in software, determining the relationship between smells and software vulnerabilities can play a significant role in vulnerability prediction models. Objective: This study explores the relationship between smells and software vulnerabilities to identify the smells that accompany vulnerable code. Method: We extracted the class, method, file, and package level smells for three systems: Apache Tomcat, Apache CXF, and Android. We then compared their occurrences in the vulnerable classes which were reported to contain vulnerable code and in the neutral classes (non-vulnerable classes where no vulnerability had yet been reported). Results: We found that a vulnerable class is more likely to have certain smells compared to a non-vulnerable class. God Class, Complex Class, Large Class, Data Class, Feature Envy, and Brain Class have a statistically significant relationship with software vulnerabilities. We found no significant relationship between architectural smells and software vulnerabilities. Conclusion: We can conclude that for all the systems examined, there is a statistically significant correlation between software vulnerabilities and some smells.
Ivaki, Naghmeh, Antunes, Nuno.  2020.  SIDE: Security-Aware Integrated Development Environment. 2020 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW). :149–150.
An effective way of building secure software is to embed security into software in the early stages of software development. Thus, we aim to study several types of evidence of code anomalies introduced during the software development phase that may be indicators of security issues in software, such as code smells, structural complexity represented by diverse software metrics, issues detected by static code analysers, and missing security best practices. To use such evidence for vulnerability prediction and removal, we first need to understand how it correlates with security issues. Then, we need to discover how these imperfect raw data can be integrated to achieve a reliable, accurate, and valuable decision about a portion of code. Finally, we need to construct a security actuator providing suggestions to the developers to remove or fix the detected issues from the code. All of these will lead to the construction of a framework, including security monitoring, security analyzer, and security actuator platforms, that are necessary for a security-aware integrated development environment (SIDE).
Franchina, L., Socal, A..  2020.  Innovative Predictive Model for Smart City Security Risk Assessment. 2020 43rd International Convention on Information, Communication and Electronic Technology (MIPRO). :1831–1836.
In a Smart City, new technologies such as big data analytics, data fusion, and artificial intelligence will increase awareness by measuring many phenomena and storing a huge amount of data. 5G will allow communication of these data among different infrastructures instantaneously. In a Smart City, security aspects are going to be a major concern. Some drawbacks, such as vulnerabilities of a highly integrated system and information overload, must be considered. To overcome these downsides, an innovative predictive model for Smart City security risk assessment has been developed. Risk metrics and indicators are defined by considering data coming from a wide range of sensors. An innovative "what if" algorithm is introduced to identify functional relationships among critical infrastructures. Therefore, it is possible to evaluate the effects of an incident that involves one infrastructure over the others.
2021-08-17
Alenezi, Freeh, Tsokos, Chris P..  2020.  Machine Learning Approach to Predict Computer Operating Systems Vulnerabilities. 2020 3rd International Conference on Computer Applications & Information Security (ICCAIS). :1–6.
Information security is everyone's concern. Computer systems are used to store sensitive data. Any weakness in their reliability and security makes them vulnerable. The Common Vulnerability Scoring System (CVSS) is a commonly used scoring system, which helps in knowing the severity of a software vulnerability. In this research, we show the effectiveness of common machine learning algorithms in predicting computer operating system security using the published vulnerability data in the Common Vulnerabilities and Exposures and National Vulnerability Database repositories. The Random Forest algorithm has the best performance, compared to other algorithms, in predicting computer operating system vulnerability severity levels based on precision, recall, and F-measure evaluation metrics. In addition, a predictive model was developed to predict whether a newly discovered computer operating system vulnerability would allow attackers to cause denial of service to the subject system.
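A minimal sketch of the kind of pipeline described, mapping vulnerability descriptions to severity classes with a Random Forest (the TF-IDF featurization and the toy records below are illustrative assumptions; the paper works from published CVE/NVD data):

```python
# Sketch of a description -> severity-level classifier like the one described
# (TF-IDF featurization and these toy records are illustrative assumptions).
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

descriptions = [
    "buffer overflow in kernel driver allows remote code execution",
    "improper input validation allows denial of service via crafted packet",
    "information disclosure through verbose error messages",
    "use-after-free in network stack allows privilege escalation",
]
severity = ["CRITICAL", "HIGH", "MEDIUM", "HIGH"]   # e.g., CVSS severity bands

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    RandomForestClassifier(n_estimators=200, random_state=0))
clf.fit(descriptions, severity)
print(clf.predict(["heap overflow enables remote attackers to execute code"]))
```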
2021-06-28
Wei, Wenqi, Liu, Ling, Loper, Margaret, Chow, Ka-Ho, Gursoy, Mehmet Emre, Truex, Stacey, Wu, Yanzhao.  2020.  Adversarial Deception in Deep Learning: Analysis and Mitigation. 2020 Second IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :236–245.
The burgeoning success of deep learning has raised security and privacy concerns as more and more tasks are accompanied by sensitive data. Adversarial attacks in deep learning have emerged as one of the dominating security threats to a range of mission-critical deep learning systems and applications. This paper takes a holistic view to characterize the adversarial examples in deep learning by studying their adverse effect and presents an attack-independent countermeasure with three original contributions. First, we provide a general formulation of adversarial examples and elaborate on the basic principle for adversarial attack algorithm design. Then, we evaluate 15 adversarial attacks with a variety of evaluation metrics to study their adverse effects and costs. We further conduct three case studies to analyze the effectiveness of adversarial examples and to demonstrate their divergence across attack instances. We take advantage of the instance-level divergence of adversarial examples and propose strategic input transformation teaming defense. The proposed defense methodology is attack-independent and capable of auto-repairing and auto-verifying the prediction decision made on the adversarial input. We show that the strategic input transformation teaming defense can achieve high defense success rates and is more robust, with high attack prevention success rates and low benign false-positive rates, compared to existing representative defense methods.
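A minimal sketch of the voting idea behind an input-transformation teaming defense: run the model on several transformed copies of the input and treat disagreement as a repair/verification signal (the transformations and the agreement rule are illustrative assumptions, not the paper's exact strategy):

```python
# Sketch of input-transformation ensemble voting: disagreement across
# transformed copies of an input is a signal of adversarial manipulation.
from collections import Counter

def teamed_predict(model, x, transforms, min_agreement=0.6):
    """Predict on several transformed inputs; the majority label wins.

    Returns (label, verified), where verified=False flags a likely
    adversarial input whose transformed copies disagree too much.
    """
    votes = [model(t(x)) for t in transforms]     # e.g., rotate, quantize, blur
    label, count = Counter(votes).most_common(1)[0]
    return label, count / len(votes) >= min_agreement
```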
2021-03-01
Kuppa, A., Le-Khac, N.-A..  2020.  Black Box Attacks on Explainable Artificial Intelligence (XAI) methods in Cyber Security. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.
The cybersecurity community is slowly leveraging Machine Learning (ML) to combat ever-evolving threats. One of the biggest drivers for the successful adoption of these models is how well domain experts and users are able to understand and trust their functionality. As these black-box models are being employed to make important predictions, the demand for transparency and explainability is increasing from stakeholders. Explanations supporting the output of ML models are crucial in cyber security, where experts require far more information from the model than a simple binary output for their analysis. Recent approaches in the literature have focused on three different areas: (a) creating and improving explainability methods which help users better understand the internal workings of ML models and their outputs; (b) attacks on interpreters in the white-box setting; (c) defining the exact properties and metrics of the explanations generated by models. However, they have not covered the security properties and threat models relevant to the cybersecurity domain, nor attacks on explainable models in black-box settings. In this paper, we bridge this gap by proposing a taxonomy for Explainable Artificial Intelligence (XAI) methods, covering various security properties and threat models relevant to the cyber security domain. We design a novel black-box attack for analyzing the consistency, correctness, and confidence security properties of gradient-based XAI methods. We validate our proposed system on 3 security-relevant datasets and models, and demonstrate that the method achieves the attacker's goal of misleading both the classifier and the explanation report while attacking only the explainability method, without affecting the classifier output. Our evaluation of the proposed approach shows promising results and can help in designing secure and robust XAI methods.
2020-04-03
Perveen, Abida, Patwary, Mohammad, Aneiba, Adel.  2019.  Dynamically Reconfigurable Slice Allocation and Admission Control within 5G Wireless Networks. 2019 IEEE 89th Vehicular Technology Conference (VTC2019-Spring). :1–7.
Serving heterogeneous traffic demand requires efficient resource utilization to deliver the promises of the 5G wireless network towards enhanced mobile broadband, massive machine-type communication, and ultra-reliable low-latency communication. In this paper, an online slice allocation model for the 5G wireless network is proposed, based on an integrated evaluation of user application-specific demand characteristics as well as network characteristics. Such characteristics include available bandwidth, power, quality-of-service demand, service priority, security sensitivity, network load, predicted load, etc. A degree of intra-slice resource sharing elasticity has been considered based on resource availability, which has been assessed from both current availability and forecasted availability. On the basis of application characteristics, an admission control strategy has been proposed. An interactive AMF (Access and Mobility Function)-RAN (Radio Access Network) information exchange has been assumed. A cost function has been derived to quantify the resource allocation decision metric, valid for both static and dynamic user and network characteristics. A dynamic intra-slice decision boundary estimation model has been proposed. A set of analytical comparative results has been obtained and compared with the results available in the literature. The results suggest that the proposed resource allocation framework is superior to existing approaches in terms of network utility, mean delay, and network grade of service, while providing similar throughput. The reported superiority is due to the soft nature of the decision metric when reconfiguring slice resource block size and boundaries.
Sattar, Naw Safrin, Arifuzzaman, Shaikh, Zibran, Minhaz F., Sakib, Md Mohiuddin.  2019.  An Ensemble Approach for Suspicious Traffic Detection from High Recall Network Alerts. 2019 IEEE International Conference on Big Data (Big Data). :4299–4308.
Web services from large-scale systems are prevalent all over the world. However, these systems are naturally vulnerable and prone to intrusion by adversaries seeking illegal benefits. To detect anomalous events, previous works focus on inspecting raw system logs by identifying the outliers in workflows or relying on machine learning methods. Though those works successfully identify the anomalies, their models use large training sets and process whole system logs. To reduce the quantity of logs that need to be processed, high-recall suspicious network alert systems can be applied to preprocess system logs, so that only the logs that trigger alerts are retrieved for further usage. Given the universal usage of network traffic alerts among Security Operations Centers, the anomaly detection problem can be transformed into classifying truly suspicious network traffic alerts from false alerts. In this work, we propose an ensemble model to distinguish truly suspicious alerts from false alerts. Our model consists of two sub-models with different feature extraction strategies to ensure diversity and generalization. We use decision tree based boosters and deep neural networks to build ensemble models for classification. Finally, we evaluate our approach on the suspicious network alerts dataset provided by the 2019 IEEE BigData Cup: Suspicious Network Event Recognition. Under the metric of AUC scores, our model achieves 0.9068 on the whole testing set.
Saridou, Betty, Shiaeles, Stavros, Papadopoulos, Basil.  2019.  DDoS Attack Mitigation through Root-DNS Server: A Case Study. 2019 IEEE World Congress on Services (SERVICES). :60–65.
Load balancing and IP anycast are traffic routing algorithms used to speed up delivery of the Domain Name System. In case of a DDoS attack or an overload condition, the value of these protocols is critical, as they can provide intrinsic DDoS mitigation with the failover alternatives. In this paper, we present a methodology for predicting the next DNS response in light of a potential redirection to less busy servers, in order to mitigate the size of the attack. Our experiments were conducted using data from the Nov. 2015 attack on the Root DNS servers, with Logistic Regression, k-Nearest Neighbors, Support Vector Machines, and Random Forest as our primary classifiers. The models were able to successfully predict up to 83% of responses for Root Letters that operated on a small number of sites and consequently suffered the most during the attacks. On the other hand, regarding DNS requests coming from more distributed Root servers, the models demonstrated lower accuracy. Our analysis showed a correlation between the True Positive Rate metric and the number of sites, as well as a clear need for intelligent management of traffic in load balancing practices.
Jabeen, Gul, Ping, Luo.  2019.  A Unified Measurable Software Trustworthy Model Based on Vulnerability Loss Speed Index. 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :18–25.
Trust is becoming increasingly important in the software domain. Because it is a complex, composite concept, people face great challenges in measuring it, especially in today's dynamic and constantly changing internet technology. In addition, measuring software trustworthiness correctly and effectively plays a significant role in gaining users' trust when choosing among different software. In the context of security, trust has previously been measured based on vulnerability occurrence times, to predict the total number of vulnerabilities or their future occurrence time. In this study, we propose a new unified index called the "loss speed index" that integrates the most important variables of software security, such as vulnerability occurrence time, number, and severity loss, to evaluate the overall software trust measurement. Based on this new definition, a new model called the software trustworthy security growth model (STSGM) is proposed. This paper also aims at filling the gap by addressing the severity of vulnerabilities and proposes a vulnerability severity prediction model; its results are further evaluated by STSGM to estimate the future loss speed index. Our work has several features: (1) it is used to predict the vulnerability severity/type in the future; (2) unlike traditional evaluation methods like expert scoring, our model uses historical data to predict the future loss speed of software; (3) the loss metric value is used to evaluate the risk associated with different software, which has a direct impact on software trustworthiness. Experiments were performed on real software vulnerability datasets, and the results were analyzed to check the correctness and effectiveness of the proposed model.
Calvert, Chad L., Khoshgoftaar, Taghi M..  2019.  Threshold Based Optimization of Performance Metrics with Severely Imbalanced Big Security Data. 2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI). :1328–1334.
Proper evaluation of classifier predictive models requires the selection of appropriate metrics to gauge a model's performance. The Area Under the Receiver Operating Characteristic Curve (AUC) has become the de facto standard metric for evaluating classifier performance. However, recent studies have suggested that AUC is not necessarily the best metric for all types of datasets, especially those in which there exists a high or severe level of class imbalance. There is a need to assess which specific metrics are most beneficial for evaluating the performance of highly imbalanced big data. In this work, we evaluate the performance of eight machine learning techniques on a severely imbalanced big dataset pertaining to the cyber security domain. We analyze the behavior of six different metrics to determine which provides the best representation of a model's predictive performance. We also evaluate the impact that adjusting the classification threshold has on our metrics. Our results show that the C4.5N decision tree is the optimal learner when evaluating all presented metrics for severely imbalanced Slow HTTP DoS attack data. Based on our results, we propose that the use of AUC alone as a primary metric for evaluating highly imbalanced big data may be ineffective, and that the evaluation of metrics such as F-measure and Geometric mean can offer substantial insight into the true performance of a given model.
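For concreteness, the alternative metrics the authors highlight are computed from confusion-matrix counts, and the threshold adjustment amounts to sweeping the score cutoff; a small sketch:

```python
# F-measure and Geometric mean from confusion-matrix counts, swept over
# classification thresholds (a sketch of the evaluation described above).
import numpy as np

def f_measure_and_gmean(y_true, scores, threshold):
    y_pred = (scores >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    tnr = tn / (tn + fp) if tn + fp else 0.0
    gmean = np.sqrt(recall * tnr)      # balances both classes under imbalance
    return f1, gmean

# Sweep thresholds to find the operating point that maximizes G-mean, e.g.:
# best_t = max(np.linspace(0, 1, 101),
#              key=lambda t: f_measure_and_gmean(y_true, scores, t)[1])
```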