Title | Feature Vulnerability and Robustness Assessment against Adversarial Machine Learning Attacks |
Publication Type | Conference Paper |
Year of Publication | 2021 |
Authors | McCarthy, Andrew, Andriotis, Panagiotis, Ghadafi, Essam, Legg, Phil |
Conference Name | 2021 International Conference on Cyber Situational Awareness, Data Analytics and Assessment (CyberSA) |
Keywords | adversarial learning, attack surface, denial-of-service attack, face recognition, feature extraction, Intrusion detection, machine learning, Metrics, network traffic analysis, Perturbation methods, pubcrawl, resilience, Resiliency, Roads, Scalability, telecommunication traffic |
Abstract | Whilst machine learning has been widely adopted across various domains, it is important to consider how such techniques may be susceptible to malicious users through adversarial attacks. Given a trained classifier, a malicious attacker may attempt to craft a data observation whose features purposefully trigger the classifier to yield incorrect responses. This has been observed in various image classification tasks, including falsified road sign detection and facial recognition, which could have severe consequences in real-world deployment. In this work, we investigate how such attacks could impact network traffic analysis, causing a system to misclassify common network attacks such as DDoS attacks. Using the CICIDS2017 data, we examine how vulnerable the data features used for intrusion detection are to perturbation attacks using FGSM adversarial examples. As a result, our method provides a defensive approach for assessing feature robustness that seeks to balance classification accuracy against minimising the attack surface of the feature space. |
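Note | The abstract refers to FGSM (Fast Gradient Sign Method) perturbations of tabular intrusion-detection features. The sketch below illustrates the general idea only; it is not the paper's implementation. The classifier, feature count, epsilon value, and the per-feature "flip rate" ranking are hypothetical placeholders standing in for the CICIDS2017 features and the authors' own robustness assessment.

```python
# Minimal FGSM sketch on tabular features (illustrative; the paper uses
# CICIDS2017 flow features and its own classifier, not the stand-ins below).
import torch
import torch.nn as nn

torch.manual_seed(0)

n_features = 20                      # placeholder for the CICIDS2017 feature count
model = nn.Sequential(               # stand-in binary classifier (benign vs. DDoS)
    nn.Linear(n_features, 32),
    nn.ReLU(),
    nn.Linear(32, 2),
)
loss_fn = nn.CrossEntropyLoss()

# Synthetic stand-in for a batch of flow-feature vectors and labels.
x = torch.rand(8, n_features)
y = torch.randint(0, 2, (8,))

def fgsm(model, x, y, epsilon=0.05):
    """FGSM: shift each feature by epsilon in the direction (sign of the
    gradient) that increases the classifier's loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

x_adv = fgsm(model, x, y)

# One crude proxy for per-feature vulnerability (an assumption, not the paper's
# metric): how often the prediction flips when only that feature is perturbed.
baseline = model(x).argmax(dim=1)
for j in range(n_features):
    x_partial = x.clone()
    x_partial[:, j] = x_adv[:, j]
    flips = (model(x_partial).argmax(dim=1) != baseline).float().mean().item()
    print(f"feature {j:2d}: prediction flip rate {flips:.2f}")
``` |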
DOI | 10.1109/CyberSA52016.2021.9478199 |
Citation Key | mccarthy_feature_2021 |