Journal: SIGKDD Explorations

Learning Deep Network Representations with Adversarially Regularized Autoencoders

Abstract

The problem of network representation learning, also known as network embedding, arises in many machine learning tasks under the assumption that there exists a small number of factors of variation in the vertex representations that can capture the "semantics" of the original network structure. Most existing network embedding models, whether shallow or deep, learn vertex representations from sampled vertex sequences such that the low-dimensional embeddings preserve locality and/or global reconstruction capability. The resulting representations, however, generalize poorly due to the intrinsic sparsity of the sequences sampled from the input network. An ideal approach to this problem is therefore to generate vertex representations by learning a probability density function over the sampled sequences. In many cases, however, such a distribution on a low-dimensional manifold may not have an analytic form. In this study, we propose to learn network representations with adversarially regularized autoencoders (NetRA). NetRA learns smoothly regularized vertex representations that capture the network structure well by jointly considering locality-preserving and global reconstruction constraints. The joint inference is encapsulated in a generative adversarial training process, which circumvents the need for an explicit prior distribution and thus achieves better generalization. We demonstrate empirically how well key properties of the network structure are captured, and show the effectiveness of NetRA on a variety of tasks, including network reconstruction, link prediction, and multi-label classification.
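
The abstract sketches three ingredients that NetRA trains jointly: an autoencoder over sampled vertex sequences (global reconstruction), a locality-preserving constraint on embeddings of adjacent vertices, and an adversarial regularizer that matches the code distribution to a learned prior instead of a fixed analytic one. Below is a minimal PyTorch sketch of how such a joint objective can be wired together; the MLP stand-ins for the paper's sequence encoder and decoder, the loss weights LAMBDA_LE and LAMBDA_GAN, and the WGAN-style critic with weight clipping are illustrative assumptions, not the paper's exact configuration.

```python
"""Minimal sketch of adversarially regularized autoencoder training in the
spirit of NetRA. Sizes, optimizers, and loss weights are assumptions."""
import torch
import torch.nn as nn
import torch.nn.functional as F

IN_DIM, EMB_DIM, NOISE_DIM = 64, 32, 16   # assumed dimensions
LAMBDA_LE, LAMBDA_GAN = 1.0, 0.1          # assumed loss weights

# MLPs stand in for the paper's sequence encoder/decoder.
encoder = nn.Sequential(nn.Linear(IN_DIM, 128), nn.ReLU(), nn.Linear(128, EMB_DIM))
decoder = nn.Sequential(nn.Linear(EMB_DIM, 128), nn.ReLU(), nn.Linear(128, IN_DIM))
generator = nn.Sequential(nn.Linear(NOISE_DIM, 128), nn.ReLU(), nn.Linear(128, EMB_DIM))
critic = nn.Sequential(nn.Linear(EMB_DIM, 128), nn.ReLU(), nn.Linear(128, 1))

opt_ae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(critic.parameters(), lr=1e-4)

def train_step(x_u, x_v):
    """One update on a batch of vertex pairs (u, v) connected by an edge."""
    # Autoencoder: global reconstruction constraint.
    z_u, z_v = encoder(x_u), encoder(x_v)
    loss_recon = F.mse_loss(decoder(z_u), x_u) + F.mse_loss(decoder(z_v), x_v)
    # Locality preservation: embeddings of adjacent vertices stay close.
    loss_le = -F.logsigmoid((z_u * z_v).sum(dim=1)).mean()
    # Adversarial term: the encoder pulls the code distribution toward the
    # generator's learned prior by lowering the critic's score on real codes.
    loss_adv = critic(z_u).mean()
    opt_ae.zero_grad()
    (loss_recon + LAMBDA_LE * loss_le + LAMBDA_GAN * loss_adv).backward()
    opt_ae.step()

    # Critic: WGAN-style, scores encoded codes high and generated codes low.
    noise = torch.randn(x_u.size(0), NOISE_DIM)
    z_real, z_fake = encoder(x_u).detach(), generator(noise).detach()
    loss_d = critic(z_fake).mean() - critic(z_real).mean()
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    for p in critic.parameters():          # weight clipping, as in WGAN
        p.data.clamp_(-0.01, 0.01)

    # Generator: produce codes the critic scores as real.
    loss_g = -critic(generator(torch.randn(x_u.size(0), NOISE_DIM))).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The division of roles follows the adversarially regularized autoencoder pattern: the critic estimates a distance between the encoded and generated code distributions, while the encoder and generator are updated to shrink it, which is what removes the need for an explicit prior over the embedding space.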
