Bibliography
Artificial neural networks in general, and deep learning networks in particular, have established themselves as popular and powerful machine learning algorithms. While the often tremendous size of these networks is beneficial when solving complex tasks, their vast number of parameters also makes them vulnerable to malicious behavior such as adversarial perturbations, which can change a model's classification decision. Moreover, while single-step adversaries can easily be transferred from network to network, transferring more powerful multi-step adversaries has usually been rather difficult. In this work, we introduce a method for generating strong adversaries that can easily (and frequently) be transferred between different models. This method is then used to generate a large set of adversaries, based on which the effects of selected defense methods are experimentally assessed. Finally, we introduce a novel, simple, yet effective approach to enhance the resilience of neural networks against adversaries and benchmark it against established defense methods. In contrast to existing methods, our proposed defense is much more efficient, as it requires only a single additional forward pass to achieve comparable performance.
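As a point of reference for the single-step adversaries and cross-model transfer discussed in this abstract, the following Python/PyTorch sketch crafts an FGSM-style one-step perturbation on a source model and checks how often it also changes a second model's prediction. It is purely illustrative and not the paper's method; the two small MLPs, the random inputs, and the epsilon value are placeholder assumptions.

import torch
import torch.nn as nn

def fgsm(model, x, y, eps):
    # One-step (FGSM) adversary: x_adv = x + eps * sign(grad_x loss)
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# Two independently initialised models standing in for source and target networks (assumed architectures).
source = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
target = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

x = torch.randn(8, 32)               # placeholder batch of inputs
y = torch.randint(0, 10, (8,))       # placeholder labels
x_adv = fgsm(source, x, y, eps=0.1)  # adversaries crafted on the source model only

# Transfer check: fraction of target-model predictions flipped by the transferred adversaries.
flipped = (target(x_adv).argmax(1) != target(x).argmax(1)).float().mean().item()
print(f"transferred adversaries changed {flipped:.0%} of the target's predictions")

Multi-step variants (e.g. iterated FGSM) tend to fit the source model's loss surface more closely, which is one commonly offered explanation for why they transfer less readily than the single-step version shown here.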
IoT malware detection using control flow graph (CFG)-based features and deep learning networks is widely explored. The main goal of this study is to investigate the robustness of such models against adversarial learning. We designed two approaches to craft adversarial IoT software: off-the-shelf methods and the Graph Embedding and Augmentation (GEA) method. For the off-the-shelf adversarial learning attacks, we examine eight different methods for forcing the model into misclassification. The GEA approach aims to preserve the functionality and practicality of the generated adversarial sample through a careful embedding of a benign sample into a malicious one. Intensive experiments are conducted to evaluate the performance of the proposed methods, showing that the off-the-shelf adversarial attack methods achieve a misclassification rate of 100%. In addition, we observed that the GEA approach is able to misclassify all IoT malware samples as benign. The findings of this work highlight the essential need for more robust detection tools against adversarial learning, including features that are not as easy to manipulate as CFG-based features. The implications of the study are quite broad, since the approach challenged in this work is widely used in other graph-based applications.
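To make the GEA idea above more concrete, the sketch below splices a benign control flow graph onto a malicious one through a single connecting edge, so the malicious structure is kept intact while simple CFG-derived features shift toward the benign graph. This is a hedged illustration only: the random graphs, the "entry" node names, and the toy feature set are assumptions, not the paper's actual feature extraction or embedding procedure.

import networkx as nx

def cfg_features(g: nx.DiGraph):
    # Toy CFG feature vector: node count, edge count, density, average degree.
    return [
        g.number_of_nodes(),
        g.number_of_edges(),
        nx.density(g),
        sum(dict(g.degree()).values()) / max(g.number_of_nodes(), 1),
    ]

def graph_embed_augment(malicious: nx.DiGraph, benign: nx.DiGraph) -> nx.DiGraph:
    # Keep every malicious node and edge, and attach the benign CFG via one new edge,
    # so the original (malicious) control flow is preserved.
    combined = nx.union(malicious, benign, rename=("m_", "b_"))
    combined.add_edge("m_entry", "b_entry")  # hypothetical connection point
    return combined

# Placeholder CFGs; in practice these would be extracted from IoT binaries.
malicious = nx.relabel_nodes(nx.gnp_random_graph(20, 0.15, seed=0, directed=True), {0: "entry"})
benign = nx.relabel_nodes(nx.gnp_random_graph(40, 0.10, seed=1, directed=True), {0: "entry"})

adversarial = graph_embed_augment(malicious, benign)
print("malicious CFG features:    ", cfg_features(malicious))
print("GEA-augmented CFG features:", cfg_features(adversarial))

A classifier that relies on such graph statistics would see the augmented sample's features pulled toward the benign distribution, which is the effect a GEA-style attack exploits.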