Binarized Attributed Network Embedding via Neural Networks

Title: Binarized Attributed Network Embedding via Neural Networks
Publication Type: Conference Paper
Year of Publication: 2020
Authors: Xia, H., Gao, N., Peng, J., Mo, J., Wang, J.
Conference Name: 2020 International Joint Conference on Neural Networks (IJCNN)
Date Published: July
Keywords: attribute similarity penalizing term, Attributed Network, autoencoder, binary code learning, Binary codes, composability, data mining, data reduction, dimension reduction, feature aggregation, feature extraction, Hamming space, Knowledge engineering, learned binary representation vectors, learning (artificial intelligence), matrix algebra, Metrics, network coding, network embedding, network theory (graphs), neural nets, neural network training, Neural networks, node classification, node clustering, Optimization, pattern classification, pattern clustering, pubcrawl, relaxation-quantization method, resilience, Resiliency, Task Analysis, unsupervised neural based binarized attributed network embedding, Weisfeiler-Lehman proximity matrix
Abstract: Traditional attributed network embedding methods are designed to map the structural and attribute information of networks jointly into a continuous Euclidean space, while recently a novel branch of them, named binarized attributed network embedding, has emerged to learn binary codes in Hamming space, aiming to save time and memory costs and to naturally fit the node retrieval task. However, current binarized attributed network embedding methods are scarce and mostly ignore the local attribute similarity between each pair of nodes. Moreover, none of them attempts to control the independence of each dimension (bit) of the learned binary representation vectors. As existing methods still leave room for improvement, we propose an unsupervised Neural-based Binarized Attributed Network Embedding (NBANE) approach. First, we inherit the Weisfeiler-Lehman proximity matrix from predecessors to aggregate high-order features for each node. Second, we feed the aggregated features into an autoencoder with an attribute similarity penalizing term and an orthogonality term to perform further dimension reduction. To solve the resulting integer optimization problem, we adopt a relaxation-quantization method while training the neural networks. Empirically, we evaluate the performance of NBANE through node classification and clustering tasks on three real-world datasets, and we study a case on fast retrieval in academic networks. Our method outperforms state-of-the-art baseline methods of various types.
DOI: 10.1109/IJCNN48605.2020.9206717
Citation Key: xia_binarized_2020
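
The abstract outlines a three-step pipeline: high-order feature aggregation via a Weisfeiler-Lehman proximity matrix, an autoencoder trained with attribute-similarity and orthogonality penalties, and relaxation-quantization to obtain binary codes. As a rough illustration only (the paper's exact proximity matrix, loss weights, and network depth are not given here; the names aggregate_features, BinarizedAutoencoder, and nbane_loss, and the weights alpha and beta, are hypothetical), a minimal PyTorch sketch of that style of objective might look like this:

import torch
import torch.nn as nn

def aggregate_features(adj, x, k=2):
    """Hypothetical stand-in for the Weisfeiler-Lehman proximity step:
    k rounds of row-normalized neighbor averaging, so each node's
    feature vector mixes in high-order neighborhood information."""
    a_hat = adj + torch.eye(adj.size(0))            # add self-loops
    a_hat = a_hat / a_hat.sum(dim=1, keepdim=True)  # row-normalize
    for _ in range(k):
        x = a_hat @ x
    return x

class BinarizedAutoencoder(nn.Module):
    """Autoencoder whose code layer uses tanh as a continuous relaxation
    of binary codes; sign() quantizes the codes after training."""
    def __init__(self, in_dim, code_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, code_dim), nn.Tanh())
        self.decoder = nn.Linear(code_dim, in_dim)

    def forward(self, x):
        h = self.encoder(x)          # relaxed codes in (-1, 1)
        return h, self.decoder(h)

def nbane_loss(x, x_hat, h, sim, alpha=0.1, beta=0.1):
    recon = ((x - x_hat) ** 2).mean()        # reconstruction error
    # Attribute-similarity penalty: node pairs with high sim[i, j]
    # are pulled toward nearby codes.
    pair_dist = torch.cdist(h, h) ** 2
    sim_pen = (sim * pair_dist).mean()
    # Orthogonality term: push the Gram matrix of code dimensions
    # toward the identity, keeping individual bits decorrelated.
    gram = h.t() @ h / h.size(0)
    ortho = ((gram - torch.eye(h.size(1))) ** 2).mean()
    return recon + alpha * sim_pen + beta * ortho

# After training, quantize the relaxed codes to binary {-1, +1}:
#   codes = torch.sign(model.encoder(aggregate_features(adj, feats)))

Quantizing with sign() after training with a tanh relaxation is one common instance of the relaxation-quantization idea the abstract names; the paper's specific training schedule may differ.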