Biblio

Filters: Author is Venkatesan, Sridhar
Ho, Samson, Reddy, Achyut, Venkatesan, Sridhar, Izmailov, Rauf, Chadha, Ritu, Oprea, Alina.  2022.  Data Sanitization Approach to Mitigate Clean-Label Attacks Against Malware Detection Systems. MILCOM 2022 - 2022 IEEE Military Communications Conference (MILCOM). :993–998.
Machine learning (ML) models are increasingly being used in the development of malware detection systems. Existing research in this area primarily focuses on developing new architectures and feature representation techniques to improve model accuracy. However, recent studies have shown that existing state-of-the-art techniques are vulnerable to adversarial machine learning (AML) attacks; among these, data poisoning attacks have been identified as a top concern for ML practitioners. A recently proposed clean-label poisoning attack, in which an adversary intentionally crafts training samples so that the model learns a backdoor watermark, was shown to degrade the performance of state-of-the-art classifiers, and defenses against such poisoning attacks remain largely under-explored. We investigate this clean-label poisoning attack and leverage an ensemble-based Nested Training technique to remove most of the poisoned samples from a poisoned training dataset. Our technique exploits the relatively large sensitivity of poisoned samples to feature noise, which disproportionately affects the accuracy of a backdoored model. In particular, we show that for two state-of-the-art architectures trained on the EMBER dataset and affected by the clean-label attack, the Nested Training approach improves the detection accuracy on backdoored malware samples from 3.42% to 93.2%. We also show that samples produced by the clean-label attack often evade malware classification even when the classifier is not poisoned during training. Even in such scenarios, our Nested Training technique mitigates the effect of these clean-label evasion attacks, recovering the model's malware-detection accuracy from 3.57% to 93.2%.
ISSN: 2155-7586
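
The noise-sensitivity idea mentioned in the abstract above can be made concrete with a small sketch. This is not the paper's implementation: the classifier, noise scale, decision threshold, and the helper name flag_noise_sensitive are assumptions chosen for illustration; the paper's defense is the ensemble-based Nested Training procedure evaluated on EMBER features.

```python
# Illustrative sketch (not the paper's method): flag training samples whose predicted
# label flips frequently under small feature noise, a proxy for poison sensitivity.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def flag_noise_sensitive(X, y, noise_scale=0.05, n_trials=5, threshold=0.5, seed=0):
    """Return a boolean mask of samples whose predictions flip often under noise."""
    rng = np.random.default_rng(seed)
    clf = GradientBoostingClassifier().fit(X, y)
    base = clf.predict(X)
    flips = np.zeros(len(X))
    for _ in range(n_trials):
        noisy = X + rng.normal(0.0, noise_scale, size=X.shape)
        flips += (clf.predict(noisy) != base)
    # Samples that flip more often than the threshold are treated as suspect and removed.
    return flips / n_trials > threshold
```
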
Venkatesan, Sridhar, Sikka, Harshvardhan, Izmailov, Rauf, Chadha, Ritu, Oprea, Alina, de Lucia, Michael J..  2021.  Poisoning Attacks and Data Sanitization Mitigations for Machine Learning Models in Network Intrusion Detection Systems. MILCOM 2021 - 2021 IEEE Military Communications Conference (MILCOM). :874–879.
Among many application domains of machine learning in real-world settings, cyber security can benefit from more automated techniques to combat sophisticated adversaries. Modern network intrusion detection systems leverage machine learning models on network logs to proactively detect cyber attacks. However, the risk of adversarial attacks against machine learning used in these cyber settings has not been fully explored. In this paper, we investigate poisoning attacks at training time against machine learning models in constrained cyber environments such as network intrusion detection; we also explore mitigations of such attacks based on training data sanitization. We consider the setting of poisoning availability attacks, in which an attacker can insert a set of poisoned samples at training time with the goal of degrading the accuracy of the deployed model. We design a white-box, realizable poisoning attack that reduces the original model accuracy from 95% to less than 50% by generating mislabeled samples in the close vicinity of a selected subset of training points. We also propose a novel Nested Training method as a defense against these attacks. Our defense includes a diversified ensemble of classifiers, each trained on a different subset of the training set. We use the disagreement of the classifiers' predictions as a data sanitization method, and show that an ensemble of 10 SVM classifiers is resilient to a large fraction of poisoning samples, up to 30% of the training data.
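
A minimal sketch of the disagreement-based sanitization step described in the abstract, assuming numeric feature arrays and label vectors; the fold construction, agreement threshold, and function name are illustrative choices, not taken from the paper.

```python
# Sketch: train an ensemble of SVMs on disjoint subsets and drop training points
# that too many ensemble members label differently from their given label.
import numpy as np
from sklearn.svm import LinearSVC

def sanitize_by_disagreement(X, y, n_members=10, agree_threshold=0.8, seed=0):
    """Return (X, y) with suspected poisoned points removed."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), n_members)
    members = [LinearSVC().fit(X[f], y[f]) for f in folds]
    votes = np.stack([m.predict(X) for m in members])   # shape: (n_members, n_samples)
    agreement = (votes == y).mean(axis=0)                # fraction of members agreeing with the label
    keep = agreement >= agree_threshold                  # suspected poison falls below the threshold
    return X[keep], y[keep]
```
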
Sugrim, Shridatt, Venkatesan, Sridhar, Youzwak, Jason A., Chiang, Cho-Yu J., Chadha, Ritu, Albanese, Massimiliano, Cam, Hasan.  2018.  Measuring the Effectiveness of Network Deception. 2018 IEEE International Conference on Intelligence and Security Informatics (ISI). :142–147.

Cyber reconnaissance is the process of gathering information about a target network for the purpose of compromising systems within that network. Network-based deception has emerged as a promising approach to disrupt attackers' reconnaissance efforts. However, limited work has been done so far on measuring the effectiveness of network-based deception. Furthermore, given that Software-Defined Networking (SDN) facilitates cyber deception by allowing network traffic to be modified and injected on-the-fly, understanding the effectiveness of employing different cyber deception strategies is critical. In this paper, we present a model to study the reconnaissance surface of a network and model the process of gathering information by attackers as interactions with a cyber defensive system that may use deception. To capture the evolution of the attackers' knowledge during reconnaissance, we design a belief system that is updated by using a Bayesian inference method. For the proposed model, we present two metrics based on KL-divergence to quantify the effectiveness of network deception. We tested the model and the two metrics by conducting experiments with a simulated attacker in an SDN-based deception system. The results of the experiments match our expectations, providing support for the model and proposed metrics.
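
The two building blocks named in the abstract, a Bayesian update of the attacker's belief over candidate network states and a KL-divergence comparison against the true state, can be illustrated with a toy example. The distributions below are invented for illustration; the paper's actual metrics differ in their details.

```python
# Toy illustration of a belief update and a KL-based effectiveness measure.
import numpy as np

def bayes_update(prior, likelihood):
    """Posterior belief over candidate network states, proportional to prior * likelihood."""
    post = np.asarray(prior, float) * np.asarray(likelihood, float)
    return post / post.sum()

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) with a small epsilon to keep zero entries finite."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    return float(np.sum(p * np.log(p / q)))

prior = np.array([0.25, 0.25, 0.25, 0.25])    # attacker's initial belief over 4 candidate states
likelihood = np.array([0.7, 0.1, 0.1, 0.1])   # probability of the (deceptive) probe response under each state
posterior = bayes_update(prior, likelihood)
truth = np.array([0.0, 1.0, 0.0, 0.0])        # the actual network state (state 2)
# A larger divergence from the true state suggests the deception is steering the belief away from reality.
print(kl_divergence(truth, prior), kl_divergence(truth, posterior))
```
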

Venkatesan, Sridhar, Albanese, Massimiliano, Shah, Ankit, Ganesan, Rajesh, Jajodia, Sushil.  2017.  Detecting Stealthy Botnets in a Resource-Constrained Environment Using Reinforcement Learning. Proceedings of the 2017 Workshop on Moving Target Defense. :75–85.

Modern botnets can persist in networked systems for extended periods of time by operating in a stealthy manner. Despite the progress made in the area of botnet prevention, detection, and mitigation, stealthy botnets continue to pose a significant risk to enterprises. Furthermore, existing enterprise-scale solutions require significant resources to operate effectively, making them impractical. To address this problem in a resource-constrained environment, we propose a reinforcement learning-based approach to optimally and dynamically deploy a limited number of defensive mechanisms, namely honeypots and network-based detectors, within the target network. The ultimate goal of the proposed approach is to reduce the lifetime of stealthy botnets by maximizing the number of bots identified and taken down through a sequential decision-making process. We provide a proof-of-concept of the proposed approach, and study its performance in a simulated environment. The results show that the proposed approach is promising in protecting against stealthy botnets.
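
To make the sequential decision-making loop concrete, here is a generic tabular Q-learning skeleton. It is a stand-in only: the env interface (reset/actions/step) is hypothetical and does not reproduce the paper's botnet simulation, and the paper's RL formulation may differ from plain Q-learning.

```python
# Generic Q-learning skeleton for sequentially choosing detector/honeypot placements.
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Learn a placement policy; Q maps (state, action) pairs to estimated long-term reward."""
    Q = defaultdict(float)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            actions = env.actions(state)                  # candidate placements in this state
            if random.random() < epsilon:                 # epsilon-greedy exploration
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)   # reward ~ bots identified and taken down
            best_next = max((Q[(next_state, a)] for a in env.actions(next_state)), default=0.0)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```
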

Wright, Mason, Venkatesan, Sridhar, Albanese, Massimiliano, Wellman, Michael P..  2016.  Moving Target Defense Against DDoS Attacks: An Empirical Game-Theoretic Analysis. Proceedings of the 2016 ACM Workshop on Moving Target Defense. :93–104.

Distributed denial-of-service attacks are an increasing problem facing web applications, for which many defense techniques have been proposed, including several moving-target strategies. These strategies typically work by relocating targeted services over time, increasing uncertainty for the attacker, while trying not to disrupt legitimate users or incur excessive costs. Prior work has not shown, however, whether and how a rational defender would choose a moving-target method against an adaptive attacker, and under what conditions. We formulate a denial-of-service scenario as a two-player game, and solve a restricted-strategy version of the game using the methods of empirical game-theoretic analysis. Using agent-based simulation, we evaluate the performance of strategies from prior literature under a variety of attacks and environmental conditions. We find evidence for the strategic stability of various proposed strategies, such as proactive server movement, delayed attack timing, and suspected insider blocking, along with guidelines for when each is likely to be most effective.
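
The empirical game-theoretic step can be sketched on a tiny normal-form game: payoffs for each (defender, attacker) strategy pair are estimated from simulation runs, and equilibria of the resulting game are identified. The strategy names and payoff numbers below are invented placeholders, not results from the paper, and the actual analysis also considers mixed-strategy equilibria.

```python
# Sketch: find pure-strategy Nash equilibria of an empirically estimated payoff table.
import numpy as np

defender = ["static", "proactive-move", "reactive-move"]
attacker = ["immediate", "delayed"]
# payoff[i, j] = (defender utility, attacker utility) for strategy pair (i, j),
# as would be estimated by averaging many agent-based simulation runs (values are illustrative).
payoff = np.array([[(-3.0, 3.0), (-3.0, 3.0)],
                   [(-1.0, 1.0), (-2.0, 2.0)],
                   [(-2.0, 2.0), (-2.5, 2.5)]])

def pure_nash(payoff):
    """Strategy pairs where neither player can gain by unilaterally deviating."""
    equilibria = []
    n_def, n_att, _ = payoff.shape
    for i in range(n_def):
        for j in range(n_att):
            if payoff[i, j, 0] >= payoff[:, j, 0].max() and payoff[i, j, 1] >= payoff[i, :, 1].max():
                equilibria.append((i, j))
    return equilibria

for i, j in pure_nash(payoff):
    print(defender[i], "vs", attacker[j])   # with these toy numbers: proactive-move vs delayed
```
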

Venkatesan, Sridhar, Albanese, Massimiliano, Cybenko, George, Jajodia, Sushil.  2016.  A Moving Target Defense Approach to Disrupting Stealthy Botnets. Proceedings of the 2016 ACM Workshop on Moving Target Defense. :37–46.

Botnets are increasingly being used for exfiltrating sensitive data from mission-critical systems. Research has shown that botnets have become extremely sophisticated and can operate in stealth mode by minimizing their host and network footprint. In order to defeat exfiltration by modern botnets, we propose a moving target defense approach for dynamically deploying detectors across a network. Specifically, we propose several strategies based on centrality measures to periodically change the placement of detectors. Our objective is to increase the attacker's effort and likelihood of detection by creating uncertainty about the location of detectors and forcing botmasters to perform additional actions in an attempt to create detector-free paths through the network. We present metrics to evaluate the proposed strategies and an algorithm to compute a lower bound on the detection probability. We validate our approach through simulations, and results confirm that the proposed solution effectively reduces the likelihood of successful exfiltration campaigns.
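
A rough sketch of the centrality-driven, periodically changing detector placement described above, using a random graph as a stand-in for the defended network; the sampling rule, graph, and parameters are illustrative and do not reproduce the paper's specific strategies or metrics.

```python
# Sketch: each monitoring period, re-draw detector locations biased toward
# high betweenness-centrality nodes to keep detector-free paths uncertain.
import random
import networkx as nx

def place_detectors(G, k, rng=random):
    """Sample k detector nodes with probability roughly proportional to betweenness centrality."""
    centrality = nx.betweenness_centrality(G)
    nodes = list(centrality)
    weights = [centrality[n] + 1e-9 for n in nodes]   # epsilon avoids an all-zero weight vector
    chosen = set()
    while len(chosen) < k:
        chosen.add(rng.choices(nodes, weights=weights, k=1)[0])
    return chosen

G = nx.erdos_renyi_graph(50, 0.08, seed=1)            # stand-in for the defended network
for period in range(3):                               # re-draw placements each monitoring period
    print(period, sorted(place_detectors(G, k=5)))
```
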