
Title: Adversarial Learning Attacks on Graph-based IoT Malware Detection Systems
Publication Type: Conference Paper
Year of Publication: 2019
Authors: Abusnaina, A., Khormali, A., Alasmary, H., Park, J., Anwar, A., Mohaisen, A.
Conference Name: 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS)
Date Published: July 2019
Publisher: IEEE
ISBN Number: 978-1-7281-2519-0
Keywords: adversarial learning, augmentation method, benign sample, CFG-based features, control flow graph-based features, craft adversarial IoT software, Deep Learning, deep learning networks, feature extraction, GEA approach, generated adversarial sample, graph analysis, graph embedding, graph theory, graph-based IoT malware detection systems, Human Behavior, Internet of Things, invasive software, IoT malware samples, learning (artificial intelligence), Malware, malware analysis, malware detection, Metrics, off-the-shelf adversarial attack methods, privacy, pubcrawl, resilience, Resiliency, robust detection tools, security, static analysis, Tools
Abstract

IoT malware detection using control flow graph (CFG)-based features and deep learning networks is widely explored. The main goal of this study is to investigate the robustness of such models against adversarial learning. We designed two approaches to craft adversarial IoT software: off-the-shelf methods and the Graph Embedding and Augmentation (GEA) method. For the off-the-shelf adversarial learning attacks, we examine eight different methods for forcing the model into misclassification. The GEA approach aims to preserve the functionality and practicality of the generated adversarial sample through a careful embedding of a benign sample into a malicious one. Intensive experiments were conducted to evaluate the performance of the proposed methods, showing that the off-the-shelf adversarial attack methods are able to achieve a misclassification rate of 100%. In addition, we observed that the GEA approach is able to misclassify all IoT malware samples as benign. The findings of this work highlight the essential need for detection tools that are more robust against adversarial learning, including features that are not easy to manipulate, unlike CFG-based features. The implications of the study are quite broad, since the approach challenged in this work is widely used in other applications that rely on graphs.
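The abstract does not name the eight off-the-shelf attacks, but a typical example of this class is the Fast Gradient Sign Method (FGSM), which perturbs an input in the sign direction of the loss gradient. The sketch below illustrates the general idea on a toy logistic classifier over hypothetical CFG-style features; the model, weights, and feature values are all invented for illustration and are not from the paper.

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """FGSM-style perturbation of feature vector x against a logistic
    classifier (y = 1 means "malware"). Returns the adversarial input."""
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))      # sigmoid probability of "malware"
    grad = (p - y) * w                # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad)    # step in the sign direction of the gradient

# Toy stand-ins for CFG-derived features (e.g. node count, edge count,
# density) and a hand-picked linear model -- purely hypothetical values.
w = np.array([0.9, 0.8, -0.4])
b = -0.5
x = np.array([1.2, 1.0, 0.3])         # originally scored as malware (z > 0)

x_adv = fgsm(x, w, b, y=1, eps=0.8)
print((x @ w + b) > 0, (x_adv @ w + b) > 0)  # decision before vs. after the attack
```

Note that a perturbation like this acts on the feature vector only; the paper's GEA approach differs precisely in that it modifies the software itself (embedding a benign sample into a malicious one) so the adversarial sample remains functional.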

URL: https://ieeexplore.ieee.org/document/8885251
DOI: 10.1109/ICDCS.2019.00130
Citation Key: abusnaina_adversarial_2019