IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining

When to Remember Where You Came from: Node Representation Learning in Higher-order Networks



Abstract

For trajectory data that tend to have beyond first-order (i.e., non-Markovian) dependencies, higher-order networks have been shown to accurately capture details lost with the standard aggregate network representation. At the same time, representation learning has shown success on a wide range of network tasks, removing the need to hand-craft features for these tasks. In this work, we propose a node representation learning framework called EVO or Embedding Variable Orders, which captures non-Markovian dependencies by combining work on higher-order networks with work on node embeddings. We show that EVO outperforms baselines in tasks where high-order dependencies are likely to matter, demonstrating the benefits of considering high-order dependencies in node embeddings. We also provide insights into when it does or does not help to capture these dependencies. To the best of our knowledge, this is the first work on representation learning for higher-order networks.
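The abstract does not give EVO's algorithm, but the combination it describes — higher-order networks plus random-walk node embeddings — can be illustrated with a minimal sketch. Here, second-order states `(prev, cur)` capture where a walker came from, walks respect those non-Markovian transitions, and a truncated SVD of walk co-occurrences stands in for the skip-gram step used by node2vec-style methods. All function names (`build_second_order`, `higher_order_walks`, `embed`) are hypothetical, not from the paper.

```python
import random
from collections import defaultdict

import numpy as np


def build_second_order(trajectories):
    """Map each (prev, cur) state to the next nodes observed after it.

    This is a second-order network: the distribution over next steps
    depends on where the walker came from, not just its current node.
    """
    trans = defaultdict(list)
    for traj in trajectories:
        for a, b, c in zip(traj, traj[1:], traj[2:]):
            trans[(a, b)].append(c)
    return trans


def higher_order_walks(trajectories, num_walks=20, walk_len=10, seed=0):
    """Random walks on the second-order network, projected back to nodes."""
    rng = random.Random(seed)
    trans = build_second_order(trajectories)
    starts = list(trans.keys())
    walks = []
    for _ in range(num_walks):
        prev, cur = rng.choice(starts)
        walk = [prev, cur]
        while len(walk) < walk_len and (prev, cur) in trans:
            nxt = rng.choice(trans[(prev, cur)])
            walk.append(nxt)
            prev, cur = cur, nxt
        walks.append(walk)
    return walks


def embed(walks, dim=2, window=2):
    """Embed first-order nodes from walk co-occurrences via truncated SVD
    (a simple stand-in for skip-gram training on the walks)."""
    nodes = sorted({n for w in walks for n in w})
    idx = {n: i for i, n in enumerate(nodes)}
    C = np.zeros((len(nodes), len(nodes)))
    for w in walks:
        for i, u in enumerate(w):
            for v in w[max(0, i - window): i + window + 1]:
                if u != v:
                    C[idx[u], idx[v]] += 1
    U, S, _ = np.linalg.svd(np.log1p(C))
    return {n: U[idx[n], :dim] * S[:dim] for n in nodes}
```

The key point the abstract makes is visible in the data below: after visiting B, a walker returns toward where it came from (A→B always leads to C, X→B always leads to Y), a pattern a first-order network would blur into a 50/50 split at B.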
