Biblio

Filters: Keyword is graph convolutional networks
Chai, Yuhan, Qiu, Jing, Su, Shen, Zhu, Chunsheng, Yin, Lihua, Tian, Zhihong.  2020.  LGMal: A Joint Framework Based on Local and Global Features for Malware Detection. 2020 International Wireless Communications and Mobile Computing (IWCMC). :463–468.
With the gradual advancement of smart city construction, various information systems have been widely deployed in smart cities. In order to obtain large economic gains, criminals frequently invade these information systems, which leads to an increase in malware. Malware attacks not only seriously infringe on the legitimate rights and interests of users, but also cause huge economic losses. Signature-based malware detection algorithms can only detect known malware and are susceptible to evasion techniques such as binary obfuscation. Behavior-based malware detection methods can address this problem, but existing malware behavior analyses may ignore the semantic information in the malware's API call sequence. In this paper, we design a joint framework based on local and global features for malware detection, called LGMal, to address the network security problems of smart cities; it combines a stacked convolutional neural network with graph convolutional networks. Specifically, the stacked convolutional neural network learns API call sequence information to capture local semantic features, and the graph convolutional networks learn API call semantic graph structure information to capture global semantic features. Experiments on the Alibaba Cloud Security Malware Detection dataset show that the joint framework achieves better results: precision of 87.76%, recall of 88.08%, and F1-measure of 87.79%. We hope this paper provides a useful approach for malware detection and helps protect the network security of smart cities.
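For illustration, a minimal PyTorch sketch of the kind of local/global architecture the abstract describes: a stacked 1-D CNN over the API call sequence fused with a simple GCN over an API semantic graph. All layer sizes, names, and the fusion scheme are assumptions for the sketch, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): a stacked 1-D CNN over API call
# sequences combined with a simple GCN over an API semantic graph.
# Dimensions, names, and the fusion scheme are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGCNLayer(nn.Module):
    """One graph-convolution step: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        # a_hat: normalized adjacency (n_api x n_api), h: API node features (n_api x in_dim)
        return F.relu(self.linear(a_hat @ h))

class JointLocalGlobalSketch(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, api_feat_dim=32, n_classes=2):
        super().__init__()
        # Local branch: embedding + stacked 1-D convolutions over the API call sequence.
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.conv1 = nn.Conv1d(emb_dim, 128, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(128, 128, kernel_size=3, padding=1)
        # Global branch: two GCN layers over the API semantic graph.
        self.gcn1 = SimpleGCNLayer(api_feat_dim, 64)
        self.gcn2 = SimpleGCNLayer(64, 64)
        self.classifier = nn.Linear(128 + 64, n_classes)

    def forward(self, api_seq, a_hat, api_feats):
        # Local semantic features from the call sequence.
        x = self.embed(api_seq).transpose(1, 2)        # (batch, emb_dim, seq_len)
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        local = x.max(dim=2).values                    # (batch, 128)
        # Global semantic features pooled from the API graph, shared across the batch.
        g = self.gcn2(a_hat, self.gcn1(a_hat, api_feats)).mean(dim=0)  # (64,)
        global_feat = g.expand(local.size(0), -1)
        return self.classifier(torch.cat([local, global_feat], dim=1))
```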
Regol, Florence, Pal, Soumyasundar, Coates, Mark.  2019.  Node Copying for Protection Against Graph Neural Network Topology Attacks. 2019 IEEE 8th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP). :709–713.
Adversarial attacks can affect the performance of existing deep learning models. With the increased interest in graph-based machine learning techniques, there have been investigations suggesting that these models are also vulnerable to attacks. In particular, corruptions of the graph topology can severely degrade the performance of graph-based learning algorithms, because the prediction capability of these algorithms relies mostly on the similarity structure imposed by the graph connectivity. Therefore, detecting the location of the corruption and correcting the induced errors becomes crucial. There has been some recent work that tackles the detection problem; however, these methods do not address the effect of the attack on the downstream learning task. In this work, we propose an algorithm that uses node copying to mitigate the degradation in classification caused by adversarial attacks. The proposed methodology is applied only after the model for the downstream task is trained, and the added computation cost scales well for large graphs. Experimental results show the effectiveness of our approach on several real-world datasets.
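A rough sketch of the general idea of repairing suspected-corrupt nodes by copying connectivity from a similar node and re-running an already trained model. This is not the authors' algorithm: the suspect-detection step, the similarity measure, and the `model.predict` interface are assumptions made for the sketch.

```python
# Minimal sketch (not the authors' algorithm): for each suspected-corrupt node,
# copy the edges of its nearest feature neighbor, then re-run a trained GCN on
# the repaired graph. `suspects` and `model` are assumed to be given.
import numpy as np

def normalize_adj(adj):
    """Symmetric normalization A_hat = D^-1/2 (A + I) D^-1/2."""
    a = adj + np.eye(adj.shape[0])
    d = np.power(a.sum(axis=1), -0.5)
    return a * d[:, None] * d[None, :]

def copy_and_repredict(adj, features, model, suspects):
    repaired = adj.astype(float).copy()
    # Cosine similarity between node feature vectors.
    norm = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    sim = norm @ norm.T
    for v in suspects:
        sim[v, v] = -np.inf             # never pick the node itself as donor
        donor = int(np.argmax(sim[v]))  # most similar node by features
        repaired[v, :] = adj[donor, :]  # copy the donor's connectivity
        repaired[:, v] = adj[:, donor]
        repaired[v, v] = 0.0
    # `model.predict` is a hypothetical interface for the trained downstream GCN.
    return model.predict(normalize_adj(repaired), features)
```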
Jiang, Jianguo, Chen, Jiuming, Gu, Tianbo, Choo, Kim-Kwang Raymond, Liu, Chao, Yu, Min, Huang, Weiqing, Mohapatra, Prasant.  2019.  Anomaly Detection with Graph Convolutional Networks for Insider Threat and Fraud Detection. MILCOM 2019 - 2019 IEEE Military Communications Conference (MILCOM). :109–114.

Anomaly detection generally involves the extraction of features from entities' or users' properties and the design of anomaly detection models using machine learning or deep learning algorithms. However, considering only entities' property information can lead to high false positive rates. We posit the importance of also considering connections or relationships between entities when detecting anomalous behaviors and associated threat groups. Therefore, in this paper, we design a GCN (graph convolutional network) based anomaly detection model to detect anomalous behaviors of users and malicious threat groups. The GCN model characterizes entities' properties and the structural information between them as graphs. This allows the GCN-based anomaly detection model to detect both anomalous behaviors of individuals and associated anomalous groups. We then evaluate the proposed model using a real-world insider threat dataset. The results show that the proposed model outperforms several state-of-the-art baseline methods (i.e., random forest, logistic regression, SVM, and CNN). Moreover, the proposed model can also be applied to other anomaly detection applications.
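For illustration, a minimal PyTorch sketch of a two-layer GCN node classifier over an entity graph, of the general kind the abstract describes. The graph construction (which relations become edges) and the feature choices are assumptions for the sketch, not taken from the paper.

```python
# Minimal sketch (not the authors' model): a two-layer GCN that labels each
# user/entity node in an insider-threat graph as benign or anomalous.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNAnomalyDetector(nn.Module):
    def __init__(self, in_dim, hidden_dim=64, n_classes=2):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden_dim)
        self.w2 = nn.Linear(hidden_dim, n_classes)

    def forward(self, a_hat, x):
        # a_hat: normalized adjacency over users/entities,
        # x: per-entity features (e.g., logon counts, device usage, file-access statistics).
        h = F.relu(self.w1(a_hat @ x))
        return self.w2(a_hat @ h)   # per-node logits: benign vs. anomalous

# Training would follow the usual semi-supervised node-classification recipe,
# with cross-entropy computed on the labeled nodes only:
# loss = F.cross_entropy(logits[train_mask], labels[train_mask])
```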

Zügner, Daniel, Akbarnejad, Amir, Günnemann, Stephan.  2018.  Adversarial Attacks on Neural Networks for Graph Data. Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. :2847–2856.
Deep learning models for graphs have achieved strong performance on the task of node classification. Despite their proliferation, there is currently no study of their robustness to adversarial attacks. Yet, in domains where they are likely to be used, e.g., the web, adversaries are common. Can deep learning models for graphs be easily fooled? In this work, we introduce the first study of adversarial attacks on attributed graphs, specifically focusing on models exploiting ideas of graph convolutions. In addition to attacks at test time, we tackle the more challenging class of poisoning/causative attacks, which target the training phase of a machine learning model. We generate adversarial perturbations targeting the node's features and the graph structure, thus taking the dependencies between instances into account. Moreover, we ensure that the perturbations remain unnoticeable by preserving important data characteristics. To cope with the underlying discrete domain, we propose an efficient algorithm, Nettack, exploiting incremental computations. Our experimental study shows that the accuracy of node classification drops significantly even when only a few perturbations are performed. Even more, our attacks are transferable: the learned attacks generalize to other state-of-the-art node classification models and unsupervised approaches, and are likewise successful even when only limited knowledge about the graph is available.
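A simplified sketch of a greedy structure attack in the spirit the abstract describes: repeatedly flip the single edge incident to the target node that most reduces its correct-class margin under a surrogate model. This is not Nettack itself; the surrogate `score(adj, features, target)` function is an assumption, and Nettack's linearized GCN, incremental scoring, feature perturbations, and unnoticeability constraints are omitted.

```python
# Minimal sketch (not Nettack): greedy edge flips around a target node under a
# given surrogate scoring function. `score` returns per-class logits for the
# target node and is an assumed interface, not part of the original method.
import numpy as np

def margin(score, adj, features, target, true_class):
    logits = score(adj, features, target)
    other = np.delete(logits, true_class)
    return logits[true_class] - other.max()   # > 0 means still classified correctly

def greedy_edge_attack(adj, features, target, true_class, score, budget=5):
    adj = adj.copy()
    n = adj.shape[0]
    for _ in range(budget):
        best_gain, best_u = 0.0, None
        base = margin(score, adj, features, target, true_class)
        for u in range(n):
            if u == target:
                continue
            # Tentatively flip the edge (target, u) and measure the margin drop.
            adj[target, u] = adj[u, target] = 1 - adj[target, u]
            gain = base - margin(score, adj, features, target, true_class)
            adj[target, u] = adj[u, target] = 1 - adj[target, u]  # undo the flip
            if gain > best_gain:
                best_gain, best_u = gain, u
        if best_u is None:
            break   # no flip reduces the margin any further
        adj[target, best_u] = adj[best_u, target] = 1 - adj[target, best_u]
    return adj
```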