Muhati, Eric, Rawat, Danda B.
2021.
Adversarial Machine Learning for Inferring Augmented Cyber Agility Prediction. IEEE INFOCOM 2021 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :1–6.
Security analysts continuously evaluate cyber-defense tools to keep pace with advanced and persistent threats. Cyber agility has become a critical proactive security resource that makes it possible to measure defense adjustments and reactions to rising threats. Consequently, machine learning has been applied to support cyber agility prediction as an essential effort to anticipate future security performance. Nevertheless, adept and treacherous actors motivated by economic incentives continue to succeed in circumventing machine learning-based protection tools. Adversarial learning, widely applied in computer security and especially in intrusion detection, has emerged as a new area of concern for the recently recognized, critical task of cyber agility prediction. The rationale is that if a sophisticated malicious actor obtains the cyber agility parameters, correct prediction cannot be guaranteed unless failure of white-box attacks can be demonstrated. The challenge lies in recognizing that unconstrained adversaries hold vast potential capabilities; in practice, they could have perfect knowledge, i.e., a full understanding of the defense tool in use. We address this challenge by proposing an adversarial machine learning approach that achieves accurate cyber agility forecasts by mapping nefarious influence onto static defense-tool metrics. Considering that an adversary would aim to induce misplaced confidence in a defense tool, we demonstrate resilient cyber agility prediction through verified attack signatures in dynamic learning windows. We then compare cyber agility prediction under negative influence with and without the proposed dynamic learning windows. Our numerical results show that the model's performance degrades without adversarial machine learning; such a feigned measure of performance could lead to incorrect software security patching.
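A minimal conceptual sketch of the comparison this abstract describes (not the authors' implementation): a predictor fitted once on historical metrics versus one refitted over sliding "dynamic learning windows", evaluated after an adversary perturbs the defense-tool metrics. The data, window length, and perturbation scale are synthetic, illustrative assumptions.

```python
# Illustrative sketch only: static model vs. sliding-window retraining under
# adversarially perturbed metrics. All data and parameters are synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
T, n_features, window = 600, 5, 100

# Synthetic drifting "defense-tool metrics" and a target agility score.
X = rng.normal(size=(T, n_features)) + np.linspace(0, 2, T)[:, None]
y = X @ np.array([0.5, -0.3, 0.8, 0.1, 0.2]) + rng.normal(scale=0.1, size=T)

# Adversarial influence: small perturbation injected into later observations.
X_adv = X.copy()
X_adv[T // 2:] += 0.5 * np.sign(rng.normal(size=(T - T // 2, n_features)))

# Static model: fit once on the first window, never updated afterwards.
static = Ridge().fit(X[:window], y[:window])
static_err = mean_absolute_error(y[window:], static.predict(X_adv[window:]))

# Dynamic-window model: refit on the most recent window before each block.
preds = []
for start in range(window, T, window):
    model = Ridge().fit(X_adv[start - window:start], y[start - window:start])
    preds.append(model.predict(X_adv[start:start + window]))
dynamic_err = mean_absolute_error(y[window:], np.concatenate(preds))

print(f"static MAE under perturbation:  {static_err:.3f}")
print(f"dynamic-window MAE:             {dynamic_err:.3f}")
```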
Ali, Wan Noor Hamiza Wan, Mohd, Masnizah, Fauzi, Fariza.
2021.
Cyberbullying Predictive Model: Implementation of Machine Learning Approach. 2021 Fifth International Conference on Information Retrieval and Knowledge Management (CAMP). :65–69.
Machine learning is implemented extensively in various applications; machine learning algorithms teach computers to do what comes naturally to humans. The objective of this study is to compare predictive models for cyberbullying detection between a basic machine learning setup and the proposed system, which adds a feature selection technique, resampling, and hyperparameter optimization, using two classifiers: linear Support Vector Classification and Decision Tree. A corpus from ASKfm was used to extract word n-gram features, which were then applied in eight different experimental setups. Evaluation of the performance metrics shows that the Decision Tree gives the best performance when tested with feature selection but without resampling or hyperparameter optimization. This shows that the proposed system is better than the basic machine learning setting.
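A minimal sketch of the kind of pipeline this abstract describes, combining word n-grams, feature selection, and a hyperparameter search over the two named classifiers. The toy texts and labels below stand in for the ASKfm corpus, which is not reproduced here, and the parameter grids are illustrative assumptions; resampling is omitted for brevity.

```python
# Illustrative cyberbullying-detection pipeline sketch (not the paper's code).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

texts = ["you are so stupid", "have a great day", "nobody likes you loser",
         "see you at practice", "shut up idiot", "thanks for the help"]
labels = [1, 0, 1, 0, 1, 0]  # 1 = bullying, 0 = not bullying (toy labels)

for name, clf, grid in [
    ("LinearSVC", LinearSVC(), {"clf__C": [0.1, 1, 10]}),
    ("DecisionTree", DecisionTreeClassifier(random_state=0),
     {"clf__max_depth": [2, 4, None]}),
]:
    pipe = Pipeline([
        ("ngrams", TfidfVectorizer(ngram_range=(1, 2))),  # word uni-/bi-grams
        ("select", SelectKBest(chi2, k=5)),               # feature selection
        ("clf", clf),
    ])
    search = GridSearchCV(pipe, grid, cv=2)               # hyperparameter tuning
    search.fit(texts, labels)
    print(name, search.best_params_, f"cv accuracy={search.best_score_:.2f}")
```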
Ramirez-Gonzalez, M., Segundo Sevilla, F. R., Korba, P.
2021.
Convolutional Neural Network Based Approach for Static Security Assessment of Power Systems. 2021 World Automation Congress (WAC). :106–110.
Steady-state response of the grid under a predefined set of credible contingencies is an important component of power system security assessment. With the growing complexity of electrical networks, fast and reliable methods and tools are required to effectively assist transmission grid operators in making decisions concerning system security procurement. In this regard, a Convolutional Neural Network (CNN)-based approach for developing prediction models for static security assessment under N-1 contingency is investigated in this paper. The CNN model is trained and applied to classify the security status of a sample system according to given node voltage magnitudes and active and reactive power injections at network buses. Considering a set of performance metrics, the superior performance of the CNN alternative is demonstrated by comparing the obtained results with those of a support vector machine classifier.
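A minimal sketch of the general idea (not the paper's architecture): a small 1-D CNN that classifies operating points as secure or insecure from per-bus features (voltage magnitude V, active injection P, reactive injection Q). The data, layer sizes, and labeling rule below are synthetic assumptions.

```python
# Illustrative CNN security-status classifier sketch on synthetic grid data.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_buses, n_samples = 30, 512

# Synthetic operating points: shape (samples, channels=3 [V, P, Q], buses).
X = torch.randn(n_samples, 3, n_buses)
# Toy "insecure" label: any bus voltage deviating strongly from nominal.
y = (X[:, 0, :].abs().max(dim=1).values > 2.0).long()

model = nn.Sequential(
    nn.Conv1d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv1d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(16, 2),                      # secure vs. insecure
)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(20):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

acc = (model(X).argmax(dim=1) == y).float().mean()
print(f"training accuracy: {acc:.2f}")
```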
Zhou, Andy, Sultana, Kazi Zakia, Samanthula, Bharath K.
2021.
Investigating the Changes in Software Metrics after Vulnerability Is Fixed. 2021 IEEE International Conference on Big Data (Big Data). :5658–5663.
Preventing software vulnerabilities while writing code is one of the most effective ways of avoiding cyber attacks on any developed system. Although developers follow standard guiding principles for ensuring secure code, the code can still have security bottlenecks and be compromised by an attacker. Therefore, assessing software security while developing code can help developers write vulnerability-free code. Researchers have already focused on metrics-based and text-mining-based software vulnerability prediction models. The metrics-based models showed higher precision in predicting vulnerabilities, although their recall rate is low. In addition, current research has not investigated the impact of individual software metrics on the occurrence of vulnerabilities. The main objective of this paper is to track the changes in every software metric after the developer fixes a particular vulnerability. The results of our research will potentially motivate further research on building more accurate vulnerability prediction models based on the appropriate software metrics. In particular, we have compared a total of 250 files from Apache Tomcat and Apache CXF. These files were extracted from the Apache database and were chosen because Apache reported them as vulnerable in its publicly available security advisories. Using a static analysis tool, metrics of the targeted vulnerable files and the corresponding fixed files (files where the vulnerable code was removed by the developers) were extracted and compared. We show that eight of the 40 metrics have an average increase of 2% from vulnerable to fixed files: CountDeclClass, CountDeclClassMethod, CountDeclClassVariable, CountDeclInstanceVariable, CountDeclMethodDefault, CountLineCode, MaxCyclomaticStrict, and MaxNesting. This study will help developers assess software security by utilizing software metrics in secure coding practices.
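An illustrative sketch of the comparison this abstract describes: given static-analysis metrics for vulnerable files and their fixed counterparts, compute the average percentage change per metric. The column names and toy values below are assumptions; the paper's actual extraction tool output and file set are not reproduced here.

```python
# Toy comparison of per-file metrics between vulnerable and fixed versions.
import pandas as pd

vulnerable = pd.DataFrame({
    "File": ["A.java", "B.java"],
    "CountLineCode": [400, 250],
    "MaxNesting": [4, 3],
    "CountDeclClassMethod": [10, 6],
})
fixed = pd.DataFrame({
    "File": ["A.java", "B.java"],
    "CountLineCode": [410, 254],
    "MaxNesting": [4, 4],
    "CountDeclClassMethod": [11, 6],
})

metrics = [c for c in vulnerable.columns if c != "File"]
merged = vulnerable.merge(fixed, on="File", suffixes=("_vuln", "_fixed"))
for m in metrics:
    pct = (merged[f"{m}_fixed"] - merged[f"{m}_vuln"]) / merged[f"{m}_vuln"] * 100
    print(f"{m}: average change from vulnerable to fixed = {pct.mean():+.1f}%")
```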
Barthe, Gilles, Blazy, Sandrine, Hutin, Rémi, Pichardie, David.
2021.
Secure Compilation of Constant-Resource Programs. 2021 IEEE 34th Computer Security Foundations Symposium (CSF). :1–12.
Observational non-interference (ONI) is a generic information-flow policy for side-channel leakage. Informally, a program is ONI-secure if observing program leakage during execution does not reveal any information about secrets. Formally, ONI is parametrized by a leakage function $\ell$, and different instances of ONI can be recovered through different instantiations of $\ell$. One popular instance of ONI is the cryptographic constant-time (CCT) policy, which is widely used in cryptographic libraries to protect against timing and cache attacks. Informally, a program is CCT-secure if it does not branch on secrets and does not perform secret-dependent memory accesses. Another instance of ONI is the constant-resource (CR) policy, a relaxation of the CCT policy which is used in Amazon's s2n implementation of TLS and in several other security applications. Informally, a program is CR-secure if its cost (modelled by a tick operator over an arbitrary semi-group) does not depend on secrets. In this paper, we consider the problem of preserving ONI by compilation. Prior work on the preservation of the CCT policy develops proof techniques for showing that the main compiler optimisations preserve the CCT policy. However, these proof techniques critically rely on the fact that the semi-group used for modelling leakage satisfies the non-cancelling property $\ell_1 + \ell_1' = \ell_2 + \ell_2' \Rightarrow \ell_1 = \ell_2 \wedge \ell_1' = \ell_2'$. Unfortunately, this non-cancelling property fails for the CR policy, because its underlying semi-group is $(\mathbb{N}, +)$, and it is currently not known how to extend existing techniques to policies that do not satisfy non-cancellation. We propose a methodology for proving the preservation of the CR policy during a program transformation. We present an implementation of some elementary compiler passes and apply the methodology to prove the preservation of these passes. Our results have been mechanically verified using the Coq proof assistant.
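A small worked counterexample (not taken from the paper) illustrating why the non-cancelling property fails over the semi-group $(\mathbb{N}, +)$ used by the constant-resource policy: equal sums do not force equal summands.

```latex
% Counterexample to non-cancellation over (N, +): equal totals, unequal parts.
\[
  \ell_1 + \ell_1' = \ell_2 + \ell_2'
  \;\not\Rightarrow\;
  \ell_1 = \ell_2 \wedge \ell_1' = \ell_2'
  \qquad\text{e.g.}\quad
  1 + 2 = 0 + 3
  \quad\text{but}\quad
  1 \neq 0 .
\]
```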