Biblio

Filters: Author is Yan, Guanhua
2023-09-18
Herath, Jerome Dinal, Wakodikar, Priti Prabhakar, Yang, Ping, Yan, Guanhua.  2022.  CFGExplainer: Explaining Graph Neural Network-Based Malware Classification from Control Flow Graphs. 2022 52nd Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN). :172–184.
With the ever-increasing threat of malware, extensive research effort has been put into applying Deep Learning to malware classification tasks. Graph Neural Networks (GNNs) that process malware as Control Flow Graphs (CFGs) have shown great promise for malware classification. However, these models are viewed as black boxes, which makes it hard to validate them and identify malicious patterns. To that end, we propose CFGExplainer, a deep learning-based model for interpreting GNN-oriented malware classification results. CFGExplainer identifies a subgraph of the malware CFG that contributes most towards classification and provides insight into the importance of the nodes (i.e., basic blocks) within it. To the best of our knowledge, CFGExplainer is the first work that explains GNN-based malware classification. We compared CFGExplainer against three explainers, namely GNNExplainer, SubgraphX, and PGExplainer, and showed that CFGExplainer is able to identify top equisized subgraphs with higher classification accuracy than the other three models.
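The core idea in the abstract (score each basic block by how much it contributes to the classifier's verdict, then keep the highest-scoring subgraph as the explanation) can be illustrated with a minimal occlusion-style sketch. This is not the paper's CFGExplainer algorithm; classify_cfg is a hypothetical stand-in for a trained GNN malware classifier that maps a CFG to class probabilities.

# Illustrative sketch only, not the CFGExplainer algorithm. `classify_cfg`
# is a hypothetical trained-GNN callable: CFG -> {class_name: probability}.
import networkx as nx

def node_importance(cfg: nx.DiGraph, classify_cfg, target_class="malware"):
    """Score each basic block by the drop in the classifier's confidence
    for `target_class` when that node (and its edges) is removed."""
    base = classify_cfg(cfg)[target_class]
    scores = {}
    for node in list(cfg.nodes):
        reduced = cfg.copy()
        reduced.remove_node(node)
        scores[node] = base - classify_cfg(reduced)[target_class]
    return scores

def top_k_subgraph(cfg: nx.DiGraph, scores, k=10):
    """Keep the k highest-scoring basic blocks as the explanatory subgraph."""
    keep = sorted(scores, key=scores.get, reverse=True)[:k]
    return cfg.subgraph(keep).copy()

A pass like this re-runs the classifier once per basic block and only conveys the general notion of subgraph importance; the learned explainer in the paper and its baselines (GNNExplainer, SubgraphX, PGExplainer) are more sophisticated.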
2019-01-21
Shu, Zhan, Yan, Guanhua.  2018.  Ensuring Deception Consistency for FTP Services Hardened Against Advanced Persistent Threats. Proceedings of the 5th ACM Workshop on Moving Target Defense. :69–79.
As evidenced by numerous high-profile security incidents such as the Target data breach and the Equifax hack, APTs (Advanced Persistent Threats) can significantly compromise the trustworthiness of cyberspace. This work explores how to improve the effectiveness of cyber deception in hardening FTP (File Transfer Protocol) services against APTs. The main objective of our work is to ensure deception consistency: when attackers are trapped, they can only make observations that are consistent with what they have already seen, so that they cannot recognize the deceptive environment. To achieve deception consistency, we use logic constraints to characterize an attacker's best knowledge (either positive, negative, or uncertain). When migrating the attacker's FTP connection into a contained environment, we use these logic constraints to instantiate a new FTP file system that is guaranteed to be free of inconsistency. We performed deception experiments with student participants who had just completed a computer security course. Following the design of Turing tests, we found that the participants' chances of recognizing deceptive environments are close to random guesses. Our experiments also confirm the importance of observation consistency in identifying deception.
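The consistency requirement described in the abstract (everything the attacker has already observed must keep holding inside the decoy, and nothing the attacker knows to be absent may appear) can be sketched as a toy set-based check. This is only an assumption-laden illustration: build_decoy_fs, its arguments, and the candidate decoy paths are hypothetical, and the paper instantiates the decoy FTP file system from logic constraints rather than by simple set filtering.

# Toy illustration of deception consistency, not the paper's method.
# positive   = paths the attacker has already observed (must exist)
# negative   = paths the attacker knows are absent (must not appear)
# candidates = decoy paths the attacker is uncertain about
import random

def build_decoy_fs(positive, negative, candidates, n_decoys=5, seed=None):
    rng = random.Random(seed)
    fs = set(positive)                                       # keep everything already seen
    pool = [p for p in candidates if p not in negative and p not in fs]
    fs.update(rng.sample(pool, min(len(pool), n_decoys)))    # pad with uncertain decoys
    assert fs.isdisjoint(negative)                           # never contradict negative knowledge
    return sorted(fs)

listing = build_decoy_fs(
    positive={"/pub/readme.txt", "/pub/releases/v1.0.tar.gz"},
    negative={"/pub/secret"},
    candidates=[f"/pub/releases/v{i}.0.tar.gz" for i in range(2, 12)],
    seed=7,
)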