Robustness Analysis of CNN-based Malware Family Classification Methods Against Various Adversarial Attacks

Title: Robustness Analysis of CNN-based Malware Family Classification Methods Against Various Adversarial Attacks
Publication Type: Conference Paper
Year of Publication: 2019
Authors: Choi, Seok-Hwan; Shin, Jin-Myeong; Liu, Peng; Choi, Yoon-Ho
Conference Name: 2019 IEEE Conference on Communications and Network Security (CNS)
Date Published: June 2019
Publisher: IEEE
ISBN Number: 978-1-5386-7117-7
Keywords: adversarial attacks, adversarial example, Analytical models, CNN-based classification methods, CNN-based malware family classification method, Conferences, convolutional neural nets, convolutional neural network-based malware family classification methods, convolutional neural networks, feature extraction, Human Behavior, image classification, Image color analysis, image colour analysis, image-based classification methods, imperceptible nonrandom perturbations, input image, invasive software, Malware, malware classification, malware family classification, Metrics, Microsoft malware dataset, privacy, pubcrawl, resilience, Resiliency, Robustness, security
Abstract

Image-based methods have attracted much attention for malware family classification. In particular, Convolutional Neural Network (CNN)-based malware family classification methods have been studied because of their fast classification speed and high classification accuracy. However, previous studies on CNN-based classification methods focused only on improving classification accuracy and did not consider that the accuracy of CNN-based malware classification methods can degrade under adversarial attacks. In this paper, we analyze the robustness of various CNN-based malware family classification models under adversarial attacks. By adding imperceptible, non-random perturbations to the input image, we measure how the accuracy of a CNN-based malware family classification model is affected. We also show the influence of three significant visualization parameters (i.e., the size of the input image, the dimension of the input image, and the conversion color of a special character) on the accuracy variation under adversarial attacks. Evaluation results on the Microsoft malware dataset show that the accuracy of a CNN-based malware family classification method can be reduced from over 98% to less than 7%.
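The abstract describes a pipeline in which a malware binary is rendered as an image, classified by a CNN, and then attacked with a small gradient-guided perturbation. The sketch below illustrates that pipeline under stated assumptions: the byte-to-grayscale conversion is the common visualization that image-based malware classifiers build on, the Fast Gradient Sign Method (FGSM) stands in for the unspecified "various adversarial attacks", and the PyTorch model, image width, and epsilon values are hypothetical rather than the authors' exact setup.

```python
import numpy as np
import torch
import torch.nn.functional as F

def bytes_to_image(path, width=256):
    """Render a binary's raw bytes as a 2-D grayscale array, one byte per pixel.

    The image width (and hence the input size/dimension) is one of the
    visualization parameters whose effect on robustness the paper studies.
    """
    data = np.fromfile(path, dtype=np.uint8)
    rows = len(data) // width
    return data[:rows * width].reshape(rows, width).astype(np.float32) / 255.0

def fgsm_attack(model, images, labels, epsilon=0.01):
    """Craft adversarial examples x' = x + epsilon * sign(grad_x loss).

    `model` is any differentiable classifier mapping image batches to logits;
    epsilon controls how imperceptible the non-random perturbation stays.
    """
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

def adversarial_accuracy(model, loader, epsilon=0.01):
    """Measure classification accuracy on FGSM-perturbed inputs."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x_adv = fgsm_attack(model, x, y, epsilon)
        with torch.no_grad():
            pred = model(x_adv).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total
```

Sweeping epsilon and the image width in a harness like this yields the kind of accuracy-versus-perturbation measurement the abstract reports, e.g. clean accuracy above 98% collapsing to below 7% under attack.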

URL: https://ieeexplore.ieee.org/document/8802809/
DOI: 10.1109/CNS.2019.8802809
Citation Key: choi_robustness_2019