Adversarial Examples Against Image-based Malware Classification Systems

Title: Adversarial Examples Against Image-based Malware Classification Systems
Publication Type: Conference Paper
Year of Publication: 2019
Authors: Vi, Bao Ngoc; Nguyen, Huu Noi; Nguyen, Ngoc Tran; Tran, Cao Truong
Conference Name: 2019 11th International Conference on Knowledge and Systems Engineering (KSE)
Date Published: Oct. 2019
Publisher: IEEE
ISBN Number: 978-1-7281-3003-3
Keywords: adversarial attack, adversarial attacks, Computer vision, convolution neural network, convolutional neural nets, convolutional neural network malware classifiers, convolutional neural networks, data visualisation, DL-based classification systems, Human Behavior, image classification, image-based malware classification systems, invasive software, learning (artificial intelligence), machine learning, malware classification techniques, Malware, malware classification, malware files, Metrics, Perturbation methods, privacy, pubcrawl, resilience, Resiliency, Robustness, visualization
Abstract

Malicious software, known as malware, has become an increasingly serious threat to computer security, so automatic malware classification techniques have received growing attention. In recent years, deep learning (DL) techniques for computer vision have been successfully applied to malware classification by visualizing malware files as images and then using DL to classify the visualized images. Although DL-based classification systems have proven much more accurate than conventional ones, they have also been shown to be vulnerable to adversarial attacks. However, little research has considered the danger of adversarial attacks against visualized image-based malware classification systems. This paper proposes a gradient-based adversarial attack method against image-based malware classification systems that introduces perturbations in the resource section of PE files. Experimental results on the Malimg dataset show that, with only small perturbations, the proposed method achieves a high attack success rate against convolutional neural network malware classifiers.
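The core idea the abstract describes — a gradient-based (FGSM-style) perturbation applied only to the bytes corresponding to one region of the visualized file — can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses a toy logistic classifier with an analytic gradient, and the mask standing in for the PE resource section is an assumption for demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_masked(x, w, b, y, mask, eps):
    """One FGSM step against a logistic classifier p = sigmoid(w.x + b),
    perturbing only the pixels where mask == 1 (the "resource section")."""
    p = sigmoid(w @ x + b)
    grad = (p - y) * w                      # d(cross-entropy)/dx for this model
    # Move in the direction that increases the loss, but only inside the mask.
    return np.clip(x + eps * np.sign(grad) * mask, 0.0, 1.0)

rng = np.random.default_rng(0)
x = rng.random(64)                          # flattened 8x8 "malware image" in [0, 1]
w = rng.standard_normal(64)                 # toy classifier weights (illustrative)
mask = np.zeros(64)
mask[48:] = 1.0                             # pretend the last bytes are the resource section
x_adv = fgsm_masked(x, w, b=0.0, y=1.0, mask=mask, eps=0.1)
```

Confining the sign-of-gradient step to a mask preserves all bytes outside the chosen section, which mirrors the paper's constraint that the perturbed file must remain a valid PE executable.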

URL: https://ieeexplore.ieee.org/document/8919481
DOI: 10.1109/KSE.2019.8919481
Citation Key: vi_adversarial_2019