Title | A Survey of Explainable Graph Neural Networks for Cyber Malware Analysis |
Publication Type | Conference Paper |
Year of Publication | 2022 |
Authors | Warmsley, Dana, Waagen, Alex, Xu, Jiejun, Liu, Zhining, Tong, Hanghang |
Conference Name | 2022 IEEE International Conference on Big Data (Big Data) |
Keywords | Big Data, classification, Computational modeling, cybersecurity, Explainability, graph neural networks, graph theory, Human Behavior, machine learning, Malware, malware analysis, Measurement, Metrics, privacy, pubcrawl, resilience, Resiliency, Resiliency Coordinator, telecommunication traffic |
Abstract | Malicious cybersecurity activities have become increasingly worrisome for individuals and companies alike. While machine learning methods like Graph Neural Networks (GNNs) have proven successful on the malware detection task, their output is often difficult to understand. Explainable malware detection methods are needed to automatically identify malicious programs and present results to malware analysts in a human-interpretable way. In this survey, we outline a number of GNN explainability methods and compare their performance on a real-world malware detection dataset. Specifically, we formulate the detection problem as a graph classification problem on malware Control Flow Graphs (CFGs). We find that gradient-based methods outperform perturbation-based methods in terms of computational expense and performance on explainer-specific metrics (e.g., Fidelity and Sparsity). Our results provide insights into designing new GNN-based models for cyber malware detection and attribution. |
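The abstract mentions the explainer-specific metrics Fidelity and Sparsity. As a rough illustration only (the paper's exact definitions and values are not reproduced here; all names and numbers below are hypothetical), a common form of these metrics can be sketched as:

```python
# Illustrative sketch of two common GNN-explainer metrics (not the
# paper's implementation): Fidelity+ measures how much the model's
# confidence in the true class drops when the explanation's important
# subgraph is removed; Sparsity measures how compact the explanation is.

def fidelity_plus(prob_full, prob_masked, label):
    """Drop in predicted probability of the true class after removing
    the nodes/edges the explainer marked as important."""
    return prob_full[label] - prob_masked[label]

def sparsity(importance_mask, threshold=0.5):
    """Fraction of graph elements the explanation leaves out;
    higher sparsity means a more compact explanation."""
    important = sum(1 for m in importance_mask if m >= threshold)
    return 1.0 - important / len(importance_mask)

# Toy CFG with 8 nodes; a hypothetical explainer marked 2 nodes important.
mask = [0.9, 0.8, 0.1, 0.2, 0.0, 0.3, 0.1, 0.05]
p_full = [0.1, 0.9]    # model output on the full graph (class 1 = malware)
p_masked = [0.6, 0.4]  # output with the 2 important nodes removed

print(fidelity_plus(p_full, p_masked, label=1))  # 0.9 - 0.4 = 0.5
print(sparsity(mask))                            # 1 - 2/8 = 0.75
```

A large Fidelity+ together with high Sparsity indicates that a small subgraph carries most of the evidence driving the malware classification.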
DOI | 10.1109/BigData55660.2022.10020943 |
Citation Key | warmsley_survey_2022 |