Biblio

Filters: Author is Izmailov, Rauf
2023-06-22
Ho, Samson, Reddy, Achyut, Venkatesan, Sridhar, Izmailov, Rauf, Chadha, Ritu, Oprea, Alina.  2022.  Data Sanitization Approach to Mitigate Clean-Label Attacks Against Malware Detection Systems. MILCOM 2022 - 2022 IEEE Military Communications Conference (MILCOM). :993–998.
Machine learning (ML) models are increasingly being used in the development of malware detection systems. Existing research in this area primarily focuses on developing new architectures and feature representation techniques to improve the accuracy of the model. However, recent studies have shown that existing state-of-the-art techniques are vulnerable to adversarial machine learning (AML) attacks. Among those, data poisoning attacks have been identified as a top concern for ML practitioners. A recently proposed clean-label poisoning attack, in which an adversary intentionally crafts training samples so that the model learns a backdoor watermark, was shown to degrade the performance of state-of-the-art classifiers. Defenses against such poisoning attacks remain largely under-explored. We investigate this clean-label poisoning attack and leverage an ensemble-based Nested Training technique to remove most of the poisoned samples from a poisoned training dataset. Our technique exploits the relatively high sensitivity of poisoned samples to feature noise, which disproportionately affects the accuracy of a backdoored model. In particular, we show that for two state-of-the-art architectures trained on the EMBER dataset and affected by the clean-label attack, the Nested Training approach improves the classification accuracy on backdoored malware samples from 3.42% to 93.2%. We also show that samples produced by the clean-label attack often successfully evade malware classification even when the classifier is not poisoned during training. However, even in such scenarios, our Nested Training technique can mitigate the effect of these clean-label-based evasion attacks by recovering the model's malware detection accuracy from 3.57% to 93.2%.
ISSN: 2155-7586
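
A minimal sketch of the noise-sensitivity idea described in the abstract above, assuming numeric EMBER-style feature vectors as numpy arrays; the classifier choice, noise scale, and flip threshold are illustrative assumptions, not the authors' implementation:

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def flag_noise_sensitive_samples(X, y, n_trials=20, noise_scale=0.05, flip_thresh=0.5):
    # Illustrative stand-in for the (possibly backdoored) malware classifier.
    base = GradientBoostingClassifier().fit(X, y)
    clean_pred = base.predict(X)
    flips = np.zeros(len(X))
    for _ in range(n_trials):
        # Perturb each feature with small Gaussian noise scaled per feature.
        noisy = X + np.random.normal(scale=noise_scale * X.std(axis=0), size=X.shape)
        flips += (base.predict(noisy) != clean_pred)
    # Poisoned samples tend to flip predictions more often under feature noise.
    return (flips / n_trials) > flip_thresh

# Usage: drop flagged samples, then retrain on the sanitized training set.
# mask = flag_noise_sensitive_samples(X_train, y_train)
# sanitized_model = GradientBoostingClassifier().fit(X_train[~mask], y_train[~mask])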
2022-04-12
Venkatesan, Sridhar, Sikka, Harshvardhan, Izmailov, Rauf, Chadha, Ritu, Oprea, Alina, de Lucia, Michael J.  2021.  Poisoning Attacks and Data Sanitization Mitigations for Machine Learning Models in Network Intrusion Detection Systems. MILCOM 2021 - 2021 IEEE Military Communications Conference (MILCOM). :874–879.
Among the many application domains of machine learning in real-world settings, cyber security can benefit from more automated techniques to combat sophisticated adversaries. Modern network intrusion detection systems leverage machine learning models on network logs to proactively detect cyber attacks. However, the risk of adversarial attacks against machine learning used in these cyber settings is not fully explored. In this paper, we investigate poisoning attacks at training time against machine learning models in constrained cyber environments such as network intrusion detection; we also explore mitigations of such attacks based on training data sanitization. We consider the setting of poisoning availability attacks, in which an attacker can insert a set of poisoned samples at training time with the goal of degrading the accuracy of the deployed model. We design a white-box, realizable poisoning attack that reduces the original model accuracy from 95% to less than 50% by generating mislabeled samples in the close vicinity of a selected subset of training points. We also propose a novel Nested Training method as a defense against these attacks. Our defense includes a diversified ensemble of classifiers, each trained on a different subset of the training set. We use the disagreement of the classifiers' predictions as a data sanitization method, and show that an ensemble of 10 SVM classifiers is resilient to a large fraction of poisoning samples, up to 30% of the training data.
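
A minimal sketch, assuming numeric network-flow features as numpy arrays, of the ensemble-disagreement sanitization described above: an ensemble of SVM classifiers, each trained on a different subset of the training data, with disagreement against the given labels used to filter suspected poisoned samples. The agreement threshold and subset scheme are illustrative assumptions, not the paper's exact settings:

import numpy as np
from sklearn.svm import LinearSVC

def sanitize_by_ensemble_disagreement(X, y, n_members=10, agree_thresh=0.7, seed=0):
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), n_members)

    # Each ensemble member is trained on a different subset of the training set.
    members = [LinearSVC(dual=False).fit(X[f], y[f]) for f in folds]

    # Keep a sample only if enough members agree with its given label.
    votes = np.stack([m.predict(X) for m in members])   # shape: (n_members, n_samples)
    agreement = (votes == y).mean(axis=0)
    keep = agreement >= agree_thresh
    return X[keep], y[keep]

# Usage: sanitize first, then train the deployed model on the kept samples.
# X_clean, y_clean = sanitize_by_ensemble_disagreement(X_train, y_train)
# final_model = LinearSVC(dual=False).fit(X_clean, y_clean)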
2018-03-05
Celik, Z. Berkay, McDaniel, Patrick, Izmailov, Rauf.  2017.  Feature Cultivation in Privileged Information-Augmented Detection. Proceedings of the 3rd ACM on International Workshop on Security And Privacy Analytics. :73–80.
Modern detection systems use sensor outputs available in the deployment environment to probabilistically identify attacks. These systems are trained on past or synthetic feature vectors to create a model of anomalous or normal behavior. Thereafter, run-time collected sensor outputs are compared to the model to identify attacks (or the absence of attacks). While this approach to detection has proven effective in many environments, it is limited to training on only those features that can be reliably collected at detection time. Hence, such systems fail to leverage the often vast amount of ancillary information available from past forensic analysis and post-mortem data. In short, detection systems do not train on (and thus do not learn from) features that are unavailable or too costly to collect at run-time. Recent work proposed an alternate model construction approach that integrates forensic "privileged" information (features reliably available at training time, but not at run-time) to improve the accuracy and resilience of detection systems. In this paper, we further evaluate two of the proposed techniques for model training with privileged information: knowledge transfer and model influence. We explore the cultivation of privileged features, the efficiency of those processes, and their influence on detection accuracy. We observe that the improved integration of privileged features makes the resulting detection models more accurate. Our evaluation shows that the use of privileged information leads to a relative decrease in detection error of up to 8.2% for fast-flux bot detection over a system with no privileged information, and of 5.5% for malware classification.
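
A minimal sketch of the knowledge-transfer variant of learning using privileged information outlined in the abstract above: privileged features (available only at training time, e.g. from forensic analysis) are approximated from run-time features by auxiliary regressors, and the detector is trained on the augmented representation. The model choices and feature layout are illustrative assumptions, not the authors' exact method:

import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def train_with_privileged(X_std, X_priv, y):
    # One regressor per privileged feature: maps run-time features to an
    # estimate of that privileged feature (the knowledge transfer step).
    transfer = [RandomForestRegressor(n_estimators=50).fit(X_std, X_priv[:, j])
                for j in range(X_priv.shape[1])]
    X_priv_hat = np.column_stack([r.predict(X_std) for r in transfer])
    # The detector is trained on run-time features plus estimated privileged ones.
    detector = RandomForestClassifier(n_estimators=100).fit(
        np.hstack([X_std, X_priv_hat]), y)
    return transfer, detector

def detect(transfer, detector, X_std):
    # At detection time only run-time features exist; privileged ones are estimated.
    X_priv_hat = np.column_stack([r.predict(X_std) for r in transfer])
    return detector.predict(np.hstack([X_std, X_priv_hat]))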