Adversarial Learning Attacks on Graph-based IoT Malware Detection Systems
Title | Adversarial Learning Attacks on Graph-based IoT Malware Detection Systems |
Publication Type | Conference Paper |
Year of Publication | 2019 |
Authors | Abusnaina, A., Khormali, A., Alasmary, H., Park, J., Anwar, A., Mohaisen, A. |
Conference Name | 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS) |
Date Published | July 2019 |
Publisher | IEEE |
ISBN Number | 978-1-7281-2519-0 |
Keywords | adversarial learning, augmentation method, benign sample, CFG-based features, control flow graph-based features, craft adversarial IoT software, Deep Learning, deep learning networks, feature extraction, GEA approach, generated adversarial sample, graph analysis, graph embedding, graph theory, graph-based IoT malware detection systems, Human Behavior, Internet of Things, invasive software, IoT malware samples, learning (artificial intelligence), Malware, malware analysis, malware detection, Metrics, off-the-shelf adversarial attack methods, privacy, pubcrawl, resilience, Resiliency, robust detection tools, security, static analysis, Tools |
Abstract | IoT malware detection using control flow graph (CFG)-based features and deep learning networks is widely explored. The main goal of this study is to investigate the robustness of such models against adversarial learning. We designed two approaches to craft adversarial IoT software: off-the-shelf methods and the Graph Embedding and Augmentation (GEA) method. With the off-the-shelf adversarial learning attack methods, we examine eight different adversarial learning methods that force the model into misclassification. The GEA approach aims to preserve the functionality and practicality of the generated adversarial sample through a careful embedding of a benign sample into a malicious one. Intensive experiments are conducted to evaluate the performance of the proposed methods, showing that the off-the-shelf adversarial attack methods achieve a misclassification rate of 100%. In addition, we observed that the GEA approach is able to misclassify all IoT malware samples as benign. The findings of this work highlight the essential need for more robust detection tools against adversarial learning, including features that are not as easy to manipulate as CFG-based features. The implications of the study are quite broad, since the approach challenged in this work is widely used in other applications that rely on graphs. |
URL | https://ieeexplore.ieee.org/document/8885251 |
DOI | 10.1109/ICDCS.2019.00130 |
Citation Key | abusnaina_adversarial_2019 |
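
For context, the GEA approach summarized in the abstract above grafts the control flow graph of a benign sample onto that of a malicious one, shifting graph-level features toward the benign class while keeping the malicious code path intact. The sketch below illustrates that idea with networkx; the merge strategy, the "opaque branch" connection, and the feature set are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a GEA-style CFG merge, assuming each sample's
# control flow graph (CFG) is available as a networkx DiGraph.
import networkx as nx


def embed_benign_into_malicious(malicious_cfg: nx.DiGraph,
                                benign_cfg: nx.DiGraph,
                                mal_entry, ben_entry) -> nx.DiGraph:
    """Graft a benign CFG onto a malicious CFG so graph-level features
    shift toward the benign class while the malicious code path stays
    reachable (the benign subgraph hangs off a never-taken branch)."""
    # Prefix node labels so the two graphs cannot collide.
    mal = nx.relabel_nodes(malicious_cfg, {n: f"mal_{n}" for n in malicious_cfg})
    ben = nx.relabel_nodes(benign_cfg, {n: f"ben_{n}" for n in benign_cfg})
    combined = nx.compose(mal, ben)
    # Attach the benign subgraph behind an opaque (never-taken) branch
    # at the malicious entry block, so runtime behavior is unchanged.
    combined.add_edge(f"mal_{mal_entry}", f"ben_{ben_entry}")
    return combined


def cfg_features(g: nx.DiGraph) -> dict:
    """Simple CFG-based graph features of the kind a detector might use."""
    und = g.to_undirected()
    return {
        "nodes": g.number_of_nodes(),
        "edges": g.number_of_edges(),
        "density": nx.density(g),
        "avg_shortest_path": (nx.average_shortest_path_length(und)
                              if nx.is_connected(und) else float("nan")),
    }
```

After the merge, cfg_features on the combined graph would report node and edge counts closer to those of the benign sample, which is the kind of feature shift a CFG-based detector can be misled by.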