Biblio

Filters: Keyword is network embedding
2021-02-23
Xia, H., Gao, N., Peng, J., Mo, J., Wang, J.  2020.  Binarized Attributed Network Embedding via Neural Networks. 2020 International Joint Conference on Neural Networks (IJCNN). :1-8.
Traditional attributed network embedding methods are designed to map the structural and attribute information of networks jointly into a continuous Euclidean space, while a novel branch of them, binarized attributed network embedding, has recently emerged to learn binary codes in Hamming space, aiming to save time and memory costs and to naturally fit the node retrieval task. However, current binarized attributed network embedding methods are scarce and mostly ignore the local attribute similarity between each pair of nodes. Moreover, none of them attempt to control the independence of each dimension (bit) of the learned binary representation vectors. To address these shortcomings, we propose an unsupervised Neural-based Binarized Attributed Network Embedding (NBANE) approach. First, we inherit the Weisfeiler-Lehman proximity matrix from prior work to aggregate high-order features for each node. Second, we feed the aggregated features into an autoencoder with an attribute-similarity penalty term and an orthogonality term for further dimension reduction. To solve the resulting integer optimization problem, we adopt a relaxation-quantization method while training the neural networks. Empirically, we evaluate the performance of NBANE on node classification and clustering tasks over three real-world datasets, and study a case of fast retrieval in academic networks. Our method outperforms state-of-the-art baseline methods of various types.
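
The paper does not ship code here; the following is a minimal PyTorch sketch of the relaxation-quantization idea the abstract describes: an autoencoder whose codes are relaxed with tanh during training and quantized with sign afterwards, with penalty terms for attribute similarity and orthogonality. All names (BinarizedAutoencoder, nbane_loss, sim) and the weightings alpha/beta are hypothetical, not the authors' implementation.

import torch
import torch.nn as nn

class BinarizedAutoencoder(nn.Module):
    # Hypothetical single-layer sketch; the paper's architecture may differ.
    def __init__(self, in_dim, code_dim):
        super().__init__()
        self.encoder = nn.Linear(in_dim, code_dim)
        self.decoder = nn.Linear(code_dim, in_dim)

    def forward(self, x):
        # Relaxation: tanh keeps codes in (-1, 1) and differentiable,
        # standing in for the non-differentiable sign() during training.
        h = torch.tanh(self.encoder(x))
        return h, self.decoder(h)

def nbane_loss(x, h, x_rec, sim, alpha=0.1, beta=0.01):
    recon = ((x_rec - x) ** 2).mean()              # reconstruction term
    # Attribute-similarity penalty: node pairs with high sim[i, j]
    # are pushed toward nearby codes.
    attr = (sim * torch.cdist(h, h) ** 2).mean()
    # Orthogonality penalty: decorrelate code dimensions (bits).
    gram = h.t() @ h / h.size(0)
    ortho = ((gram - torch.eye(h.size(1))) ** 2).sum()
    return recon + alpha * attr + beta * ortho

# Quantization after training yields binary Hamming codes for retrieval:
# model = BinarizedAutoencoder(in_dim=1433, code_dim=128)
# h, x_rec = model(features)   # features: aggregated high-order node features
# codes = torch.sign(h)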
2019-05-01
Yu, Wenchao, Zheng, Cheng, Cheng, Wei, Aggarwal, Charu C., Song, Dongjin, Zong, Bo, Chen, Haifeng, Wang, Wei.  2018.  Learning Deep Network Representations with Adversarially Regularized Autoencoders. Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. :2663-2671.

The problem of network representation learning, also known as network embedding, arises in many machine learning tasks under the assumption that a small number of factors of variation in the vertex representations can capture the "semantics" of the original network structure. Most existing network embedding models, with shallow or deep architectures, learn vertex representations from sampled vertex sequences such that the low-dimensional embeddings preserve the locality property and/or the global reconstruction capability. The resulting representations, however, generalize poorly due to the intrinsic sparsity of the sequences sampled from the input network. An ideal approach would therefore generate vertex representations by learning a probability density function over the sampled sequences; in many cases, though, such a distribution on a low-dimensional manifold has no analytic form. In this study, we propose to learn network representations with adversarially regularized autoencoders (NetRA). NetRA learns smoothly regularized vertex representations that capture the network structure well by jointly considering locality-preserving and global reconstruction constraints. The joint inference is encapsulated in a generative adversarial training process, which circumvents the need for an explicit prior distribution and thus achieves better generalization. We demonstrate empirically how well key properties of the network structure are captured and show the effectiveness of NetRA on a variety of tasks, including network reconstruction, link prediction, and multi-label classification.
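
As a rough illustration of the adversarially regularized autoencoder idea, here is a simplified PyTorch sketch, not the NetRA implementation: an autoencoder whose code distribution is matched, via a discriminator, to samples from a learned generator rather than a fixed analytic prior. The dimensions, single training step, and label-swap adversarial term are all assumptions for the sketch (the paper encodes sampled vertex sequences with a deeper model and its own objectives).

import torch
import torch.nn as nn

IN, LATENT = 1000, 64   # hypothetical input and code sizes

enc  = nn.Sequential(nn.Linear(IN, 256), nn.ReLU(), nn.Linear(256, LATENT))
dec  = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, IN))
gen  = nn.Sequential(nn.Linear(LATENT, LATENT), nn.ReLU(), nn.Linear(LATENT, LATENT))
disc = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, 1))

bce      = nn.BCEWithLogitsLoss()
opt_ae   = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
opt_gen  = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-3)

def train_step(x):
    n = x.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) Discriminator: separate encoder codes from generator codes.
    z, z_g = enc(x), gen(torch.randn(n, LATENT))
    d_loss = bce(disc(z.detach()), ones) + bce(disc(z_g.detach()), zeros)
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # 2) Autoencoder: global reconstruction plus an adversarial term that
    #    pulls the encoder's code distribution toward the generator's,
    #    smoothing the embedding space without an explicit prior.
    z = enc(x)
    ae_loss = ((dec(z) - x) ** 2).mean() + 0.1 * bce(disc(z), zeros)
    opt_ae.zero_grad(); ae_loss.backward(); opt_ae.step()

    # 3) Generator: move its samples toward the encoder's code distribution.
    g_loss = bce(disc(gen(torch.randn(n, LATENT))), ones)
    opt_gen.zero_grad(); g_loss.backward(); opt_gen.step()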

2018-12-10
Yang, Dejian, Wang, Senzhang, Li, Chaozhuo, Zhang, Xiaoming, Li, Zhoujun.  2017.  From Properties to Links: Deep Network Embedding on Incomplete Graphs. Proceedings of the 2017 ACM on Conference on Information and Knowledge Management. :367–376.
As an effective way of learning node representations in networks, network embedding has attracted increasing research interest recently. Most existing approaches use shallow models and work only on static networks, extracting the local or global topology of each node as the algorithm's input. It is challenging for such approaches to learn a desirable node representation on incomplete graphs with a large number of missing links, or on dynamic graphs with new nodes joining in. It is even more challenging for them to deeply fuse other types of data, such as node properties, into the learning process to better represent nodes with insufficient links. In this paper, we study for the first time the problem of network embedding on incomplete networks. We propose a Multi-View Correlation-learning based Deep Network Embedding method named MVC-DNE that incorporates both the network structure and the node properties to perform network embedding on incomplete networks more effectively and efficiently. Specifically, we treat the topology of the network and the node properties as two correlated views. The insight is that the learned representation vector of a node should reflect its characteristics in both views. Under a multi-view correlation-learning based deep autoencoder framework, the structure-view and property-view embeddings are integrated and mutually reinforced through both self-view and cross-view learning. Because MVC-DNE learns a representation mapping function, it can directly generate representation vectors for new nodes without retraining the model, making it substantially more efficient than previous methods. Empirically, we evaluate MVC-DNE on three real network datasets across two data mining applications, and the results demonstrate that MVC-DNE significantly outperforms state-of-the-art methods.
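
To make the self-view/cross-view idea concrete, here is a minimal PyTorch sketch of a two-view autoencoder, assuming single-layer encoders and decoders (the paper uses deep autoencoders, and the class name MultiViewAE and loss weighting are hypothetical): each view reconstructs itself and, through the shared codes, the other view as well.

import torch
import torch.nn as nn

class MultiViewAE(nn.Module):
    def __init__(self, d_struct, d_prop, d_emb):
        super().__init__()
        self.enc_s, self.dec_s = nn.Linear(d_struct, d_emb), nn.Linear(d_emb, d_struct)
        self.enc_p, self.dec_p = nn.Linear(d_prop, d_emb), nn.Linear(d_emb, d_prop)

    def forward(self, x_s, x_p):
        z_s = torch.relu(self.enc_s(x_s))   # structure-view code
        z_p = torch.relu(self.enc_p(x_p))   # property-view code
        # Self-view reconstruction: each view explains itself.
        self_loss = ((self.dec_s(z_s) - x_s) ** 2).mean() \
                  + ((self.dec_p(z_p) - x_p) ** 2).mean()
        # Cross-view reconstruction: each code must also explain the other
        # view, so structure and properties mutually reinforce; a node with
        # few links can still be embedded from its properties.
        cross_loss = ((self.dec_s(z_p) - x_s) ** 2).mean() \
                   + ((self.dec_p(z_s) - x_p) ** 2).mean()
        return torch.cat([z_s, z_p], dim=1), self_loss + cross_loss

# Because the mapping is a learned function, a new node's embedding comes
# from a forward pass over its available views, with no retraining.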