Learning Deep Network Representations with Adversarially Regularized Autoencoders

Title: Learning Deep Network Representations with Adversarially Regularized Autoencoders
Publication Type: Conference Paper
Year of Publication: 2018
Authors: Yu, Wenchao; Zheng, Cheng; Cheng, Wei; Aggarwal, Charu C.; Song, Dongjin; Zong, Bo; Chen, Haifeng; Wang, Wei
Conference Name: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining
Publisher: ACM
ISBN Number: 978-1-4503-5552-0
Keywords: autoencoder, GANs, Generative Adversarial Learning, generative adversarial networks, Metrics, network embedding, pubcrawl, Resiliency, Scalability
Abstract:

The problem of network representation learning, also known as network embedding, arises in many machine learning tasks under the assumption that a small number of factors of variability in the vertex representations can capture the "semantics" of the original network structure. Most existing network embedding models, whether shallow or deep, learn vertex representations from sampled vertex sequences such that the low-dimensional embeddings preserve the locality property and/or global reconstruction capability. The resulting representations, however, generalize poorly due to the intrinsic sparsity of the sequences sampled from the input network. An ideal approach would therefore generate vertex representations by learning a probability density function over the sampled sequences. In many cases, however, such a distribution on a low-dimensional manifold may not have an analytic form. In this study, we propose to learn network representations with adversarially regularized autoencoders (NetRA). NetRA learns smoothly regularized vertex representations that capture the network structure well by jointly considering both locality-preserving and global reconstruction constraints. The joint inference is encapsulated in a generative adversarial training process, which circumvents the need for an explicit prior distribution and thus obtains better generalization performance. We demonstrate empirically how well key properties of the network structure are captured, and show the effectiveness of NetRA on a variety of tasks, including network reconstruction, link prediction, and multi-label classification.
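The abstract describes a joint objective combining global reconstruction, locality preservation, and an adversarial regularizer on the latent space. The toy sketch below illustrates how such a composite loss could be assembled for a single graph edge; all function names, the toy discriminator, and the weighting scheme are illustrative assumptions, not the authors' implementation.

```python
import math

def reconstruction_loss(x, x_hat):
    """Global reconstruction term: squared error between an input
    vector and its autoencoder reconstruction."""
    return sum((a - b) ** 2 for a, b in zip(x, x_hat))

def locality_loss(z_u, z_v):
    """Locality-preserving term: pull the embeddings of two
    adjacent vertices u and v toward each other."""
    return sum((a - b) ** 2 for a, b in zip(z_u, z_v))

def discriminator(z):
    """Toy discriminator (an assumption, not the paper's network):
    a logistic score for whether z looks like a prior sample."""
    return 1.0 / (1.0 + math.exp(-sum(z)))

def adversarial_regularizer(z):
    """The encoder, acting as a GAN generator, is rewarded when the
    discriminator scores its output highly: -log D(z)."""
    return -math.log(discriminator(z) + 1e-12)

def joint_objective(x, x_hat, z_u, z_v, lam1=1.0, lam2=1.0):
    """Composite loss over one edge (u, v); lam1 and lam2 are
    illustrative trade-off weights."""
    return (reconstruction_loss(x, x_hat)
            + lam1 * locality_loss(z_u, z_v)
            + lam2 * adversarial_regularizer(z_u))

# Toy usage on a single edge with hand-picked vectors.
x = [1.0, 0.0, 1.0]            # one row of the vertex input
x_hat = [0.9, 0.1, 0.8]        # its (hypothetical) reconstruction
z_u, z_v = [0.5, -0.2], [0.4, -0.1]  # embeddings of an edge's endpoints
loss = joint_objective(x, x_hat, z_u, z_v)
```

In actual adversarial training the discriminator and encoder would be neural networks updated in alternation; this sketch only shows how the three terms the abstract names combine into one scalar objective.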

URL: http://dx.doi.org/10.1145/3219819.3220000
DOI: 10.1145/3219819.3220000
Citation Key: yu_learning_2018