Robustness Analysis of CNN-based Malware Family Classification Methods Against Various Adversarial Attacks
Title | Robustness Analysis of CNN-based Malware Family Classification Methods Against Various Adversarial Attacks
Publication Type | Conference Paper |
Year of Publication | 2019 |
Authors | Choi, Seok-Hwan; Shin, Jin-Myeong; Liu, Peng; Choi, Yoon-Ho
Conference Name | 2019 IEEE Conference on Communications and Network Security (CNS) |
Date Published | June
Publisher | IEEE |
ISBN Number | 978-1-5386-7117-7 |
Keywords | adversarial attacks, adversarial example, Analytical models, CNN-based classification methods, CNN-based malware family classification method, Conferences, convolutional neural nets, convolutional neural network-based malware family classification methods, convolutional neural networks, feature extraction, Human Behavior, image classification, Image color analysis, image colour analysis, image-based classification methods, imperceptible nonrandom perturbations, input image, invasive software, Malware, malware classification, malware family classification, Metrics, Microsoft malware dataset, privacy, pubcrawl, resilience, Resiliency, Robustness, security
Abstract | As malware family classification methods, image-based classification methods have attracted much attention. In particular, Convolutional Neural Network (CNN)-based malware family classification methods have been studied because of their fast classification speed and high classification accuracy. However, previous studies of CNN-based classification methods focused only on improving the classification accuracy for malware families; they did not consider that the accuracy of CNN-based malware classification methods can degrade under adversarial attacks. In this paper, we analyze the robustness of various CNN-based malware family classification models under adversarial attacks. While adding imperceptible non-random perturbations to the input image, we measure how the accuracy of the CNN-based malware family classification model is affected. We also show the influence of three significant visualization parameters (i.e., the size of the input image, the dimension of the input image, and the conversion color of a special character) on the accuracy variation under adversarial attacks. Evaluation results on the Microsoft malware dataset show that the accuracy of a CNN-based malware family classification method can drop from over 98% to less than 7%.
URL | https://ieeexplore.ieee.org/document/8802809/ |
DOI | 10.1109/CNS.2019.8802809 |
Citation Key | choi_robustness_2019 |
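
The abstract describes the experimental setup: malware binaries are rendered as images, a CNN predicts the malware family, and imperceptible non-random perturbations to the input image collapse the classification accuracy. Below is a minimal sketch of that pipeline, for illustration only. The abstract names neither the attack nor the architecture, so the FGSM attack, the toy CNN, and all names, sizes, and parameter values here are assumptions rather than the authors' method; only the nine-family count of the Microsoft malware dataset is taken as given.

```python
# Minimal sketch, assuming an FGSM-style attack on a toy CNN that classifies
# malware rendered as a grayscale image. All names and hyperparameters are
# hypothetical; the paper's actual models and attacks may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_FAMILIES = 9   # the Microsoft malware dataset has 9 families
IMG_SIZE = 64      # assumption: one of the "input image size" settings studied

def bytes_to_image(raw: bytes, size: int = IMG_SIZE) -> torch.Tensor:
    """Render a malware byte sequence as a 1xHxW grayscale image,
    padding or truncating to size*size bytes (a common visualization scheme)."""
    buf = raw[: size * size].ljust(size * size, b"\x00")
    pixels = torch.tensor(list(buf), dtype=torch.float32) / 255.0
    return pixels.view(1, size, size)

class SmallCNN(nn.Module):
    """Stand-in for the CNN-based family classifiers analyzed in the paper."""
    def __init__(self, num_classes: int = NUM_FAMILIES):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.fc = nn.Linear(32 * (IMG_SIZE // 4) ** 2, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        return self.fc(x.flatten(1))

def fgsm_attack(model, image, label, epsilon=0.03):
    """Fast Gradient Sign Method: add an imperceptible, non-random
    perturbation in the direction that increases the classification loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image.unsqueeze(0)), label.unsqueeze(0))
    loss.backward()
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    model = SmallCNN().eval()
    img = bytes_to_image(b"\x4d\x5a\x90\x00" * 1024)  # toy byte sequence
    label = torch.tensor(0)
    adv = fgsm_attack(model, img, label)
    print("clean pred:", model(img.unsqueeze(0)).argmax().item())
    print("adv pred:  ", model(adv.unsqueeze(0)).argmax().item())
```

The `epsilon` bound keeps the per-pixel change small, which is what makes the perturbation "imperceptible" in the image domain; varying `IMG_SIZE`, the channel count, and the byte-to-pixel mapping would correspond loosely to the three visualization parameters the paper studies.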