Adversarial Examples Against Image-based Malware Classification Systems
Title | Adversarial Examples Against Image-based Malware Classification Systems |
Publication Type | Conference Paper |
Year of Publication | 2019 |
Authors | Vi, Bao Ngoc; Nguyen, Huu Noi; Nguyen, Ngoc Tran; Tran, Cao Truong |
Conference Name | 2019 11th International Conference on Knowledge and Systems Engineering (KSE) |
Date Published | Oct. 2019 |
Publisher | IEEE |
ISBN Number | 978-1-7281-3003-3 |
Keywords | adversarial attack, adversarial attacks, Computer vision, convolution neural network, convolutional neural nets, convolutional neural network malware classifiers, convolutional neural networks, data visualisation, DL-based classification systems, Human Behavior, image classification, image-based malware classification systems, invasive software, learning (artificial intelligence), machine learning, malware classification techniques, Malware, malware classification, malware files, Metrics, Perturbation methods, privacy, pubcrawl, resilience, Resiliency, Robustness, visualization |
Abstract | Malicious software, known as malware, has become an urgent and serious threat to computer security, so automatic malware classification techniques have received increasing attention. In recent years, deep learning (DL) techniques from computer vision have been successfully applied to malware classification by visualizing malware files as images and then using DL to classify the resulting images. Although DL-based classification systems have proven to be much more accurate than conventional ones, they have also been shown to be vulnerable to adversarial attacks. However, little research has considered the danger that adversarial attacks pose to visualized-image-based malware classification systems. This paper proposes a gradient-based adversarial attack method against image-based malware classification systems that introduces perturbations into the resource section of PE files. Experimental results on the Malimg dataset show that, with only a small perturbation, the proposed method can achieve a high attack success rate when challenging convolutional neural network malware classifiers. (A minimal illustrative sketch of this style of region-masked gradient attack follows the record below.) |
URL | https://ieeexplore.ieee.org/document/8919481 |
DOI | 10.1109/KSE.2019.8919481 |
Citation Key | vi_adversarial_2019 |
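
The abstract describes a gradient-based attack that confines adversarial perturbations to the resource section of a PE file, so that the bytes being modified change the visualized image without touching the parts of the executable that must stay intact. The sketch below illustrates that idea with an FGSM-style step, x_adv = x + ε · m ⊙ sign(∇_x J(θ, x, y)), where the mask m zeroes the gradient signal outside the permitted region. The TinyCNN model, the mask placement, and the epsilon value are all illustrative assumptions for demonstration, not the authors' actual architecture or parameters.

```python
# Minimal sketch of a gradient-based (FGSM-style) attack that confines the
# perturbation to a masked region of a visualized malware image, loosely
# following the idea in the abstract. The CNN, the mask boundaries, and
# epsilon are illustrative assumptions, not the paper's actual setup.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Stand-in CNN classifier for 64x64 grayscale malware images."""
    def __init__(self, num_classes=25):  # Malimg contains 25 malware families
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def masked_fgsm(model, image, label, mask, epsilon=0.05):
    """One FGSM step whose perturbation is zeroed outside `mask`, so only
    the chosen byte region (e.g. the PE resource section) is modified."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Sign of the input gradient, restricted to the masked region.
    perturbation = epsilon * image.grad.sign() * mask
    return (image + perturbation).clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinyCNN().eval()
    image = torch.rand(1, 1, 64, 64)   # fake visualized malware sample
    label = torch.tensor([3])          # its (assumed) true family index
    mask = torch.zeros_like(image)
    mask[..., 48:, :] = 1.0            # pretend the last rows map to .rsrc bytes
    adv = masked_fgsm(model, image, label, mask)
    print("pixels changed:", (adv != image).sum().item())
```

In the paper's setting the perturbed bytes would have to be chosen so the PE file remains a valid, functioning executable; here the mask merely demonstrates the key mechanism, namely that gradient information outside the permitted region is discarded before the perturbation is applied.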