Biblio

Filters: Author is Muhati, Eric
2022-03-15
Naik Sapavath, Naveen, Muhati, Eric, Rawat, Danda B.  2021.  Prediction and Detection of Cyberattacks using AI Model in Virtualized Wireless Networks. 2021 8th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud)/2021 7th IEEE International Conference on Edge Computing and Scalable Cloud (EdgeCom). :97–102.
Securing communication between any two wireless devices or users without compromising sensitive or personal data is challenging. To address this problem, we have developed an artificial intelligence (AI) algorithm to secure communication on virtualized wireless networks. Detecting cyberattacks in a virtualized environment is harder than in a traditional wireless network setting. We investigate an efficient cyberattack detection algorithm that uses a Bayesian learning model to detect cyberattacks on the fly. We compare it against Random Forest and deep neural network (DNN) models for detecting cyberattacks on a virtualized wireless network, using the required transmission power as a threshold value to classify suspicious activity. We present both formal mathematical analysis and numerical results to support our claims. The numerical results show that the detection accuracy of the proposed Bayesian model is higher than that of the Random Forest and DNN models. We also compare the models in terms of detection errors. The performance comparison shows that our proposed approach outperforms existing approaches in detection accuracy, precision, and recall.
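As an illustration of the threshold-based classification the abstract describes, here is a minimal sketch, not the authors' implementation: it labels synthetic traffic as suspicious when the required transmission power exceeds an assumed threshold, then compares a Bayesian classifier (Gaussian naive Bayes) with a Random Forest on that task. The feature names, threshold value, and data are all illustrative assumptions.

```python
# Sketch only: synthetic data and an assumed transmission-power threshold,
# comparing a Bayesian classifier with a Random Forest as the abstract describes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n = 5000

# Hypothetical per-flow features for a virtualized wireless network.
tx_power_dbm = rng.normal(15.0, 5.0, n)   # required transmission power
packet_rate = rng.exponential(200.0, n)   # packets per second
X = np.column_stack([tx_power_dbm, packet_rate])

# Threshold rule from the abstract: activity requiring more than the expected
# transmission power is treated as suspicious (threshold chosen arbitrarily).
POWER_THRESHOLD_DBM = 20.0
y = (tx_power_dbm > POWER_THRESHOLD_DBM).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

models = {
    "Bayesian (GaussianNB)": GaussianNB(),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(
        f"{name}: accuracy={accuracy_score(y_test, pred):.3f} "
        f"precision={precision_score(y_test, pred):.3f} "
        f"recall={recall_score(y_test, pred):.3f}"
    )
```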
2022-02-24
Muhati, Eric, Rawat, Danda B.  2021.  Adversarial Machine Learning for Inferring Augmented Cyber Agility Prediction. IEEE INFOCOM 2021 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :1–6.
Security analysts continuously evaluate cyber-defense tools to keep pace with advanced and persistent threats. Cyber agility has become a critical proactive security resource that makes it possible to measure defense adjustments and reactions to rising threats. Machine learning has subsequently been applied to cyber agility prediction as an essential effort to anticipate future security performance. Nevertheless, apt and treacherous actors motivated by economic incentives continue to circumvent machine learning-based protection tools. Adversarial learning, widely studied in computer security and especially in intrusion detection, has emerged as a new area of concern for the recently recognized, critical task of cyber agility prediction. The rationale is that if a sophisticated malicious actor obtains the cyber agility parameters, correct prediction cannot be guaranteed unless white-box attacks are shown to fail. The challenge lies in recognizing that unconstrained adversaries hold vast potential capabilities; in practice, they could have perfect knowledge, i.e., a full understanding of the defense tool in use. We address this challenge by proposing an adversarial machine learning approach that achieves accurate cyber agility forecasts by mapping nefarious influence onto static defense-tool metrics. Considering that an adversary would aim to induce perilous confidence in a defense tool, we demonstrate resilient cyber agility prediction through verified attack signatures in dynamic learning windows. We then compare cyber agility prediction under negative influence with and without our proposed dynamic learning windows. Our numerical results show that the model's performance degrades without adversarial machine learning; such a feigned measure of performance could lead to incorrect software security patching.
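The sketch below illustrates, under stated assumptions, the kind of comparison the abstract describes: a cyber agility metric is forecast from a sliding window of past values, an adversarial bias is injected mid-stream, and prediction error is compared between a static training window and a dynamic (rolling) learning window. The series, window sizes, and perturbation model are hypothetical and not taken from the paper.

```python
# Sketch only: synthetic "agility metric" series with an injected adversarial
# shift, comparing prediction error with static versus dynamic learning windows.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
T, LAGS = 600, 10

# Synthetic agility metric with an adversarial bias injected mid-stream,
# standing in for nefarious influence on defense-tool metrics.
series = np.sin(np.linspace(0, 12 * np.pi, T)) + rng.normal(0, 0.05, T)
series[T // 2:] += 0.8

def lagged(data, lags):
    # Build lagged feature rows: each row holds the previous `lags` values.
    X = np.column_stack([data[i:len(data) - lags + i] for i in range(lags)])
    y = data[lags:]
    return X, y

X, y = lagged(series, LAGS)
split = T // 2

# Static window: train once on the clean history and never adapt.
static = Ridge().fit(X[:split - LAGS], y[:split - LAGS])
static_err = mean_squared_error(y[split:], static.predict(X[split:]))

# Dynamic window: retrain at each step on the most recent 100 observations.
dyn_preds = []
for t in range(split, len(y)):
    model = Ridge().fit(X[max(0, t - 100):t], y[max(0, t - 100):t])
    dyn_preds.append(model.predict(X[t:t + 1])[0])
dynamic_err = mean_squared_error(y[split:], dyn_preds)

print(f"static-window MSE under attack:  {static_err:.4f}")
print(f"dynamic-window MSE under attack: {dynamic_err:.4f}")
```

In this toy setting the rolling window absorbs the injected bias while the static model keeps predicting from pre-attack behavior, which is the intuition behind comparing prediction under negative influence with and without dynamic learning windows.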