
Node classification using kernel propagation in graph neural networks


Abstract

In this work, we introduce a kernel propagation method that enables graph neural networks (GNNs) to leverage higher-order network structural information without increasing the complexity of the networks. Recent studies have introduced GNNs that include higher-order neighborhood features containing global network information by propagating node features with a higher-order feature propagation rule. Although these GNNs have been shown to improve node classification performance, they fail to include local connectivity information. Alternatively, GNNs concatenate increasing orders of the adjacency matrix in deeper layers in order to include higher-order structural information. In addition to global network information, GNNs also make use of node features, which are network- and node-dependent features that serve to distinguish structurally isomorphic sub-structures within graphs. However, such node features may not always be available or, depending on the network, may degrade classification performance. Hence, to resolve these limitations, we propose a kernel propagation method that introduces a pre-processing step enabling GNNs to leverage higher-order structural features. The higher-order structural features are computed using a weighted random walk matrix, which is node independent, while the first-order spectral propagation rule, which explicitly considers local connectivity, is retained. Through our benchmark experiments, we find that the computed higher-order structural features are capable of replacing node-dependent features in the node classification task, with performance on par with state-of-the-art approaches. Further, we find that including both node features and higher-order structural features increases the performance of GNNs on the large-scale benchmark networks considered in this study. Our results show that providing local and global structural information as input to GNNs improves node classification performance both in the absence and in the presence of node features, without loss of performance.
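To illustrate the kind of pre-processing the abstract describes, the minimal NumPy sketch below computes a weighted sum of random-walk transition-matrix powers as node-independent structural features and then propagates them with the standard first-order spectral (GCN) rule. The walk length, geometric decay weights, and the use of the kernel rows as input features are illustrative assumptions, not the paper's exact formulation.

# Illustrative sketch only: the specific kernel weighting and feature choice
# are assumptions; the abstract does not give the exact formulation.
import numpy as np

def random_walk_kernel(adj, num_steps=3, decay=0.5):
    """Weighted sum of random-walk transition-matrix powers (node independent)."""
    deg = adj.sum(axis=1, keepdims=True)
    transition = adj / np.maximum(deg, 1.0)          # row-stochastic D^{-1} A
    kernel = np.zeros_like(adj, dtype=float)
    power = np.eye(adj.shape[0])
    for k in range(1, num_steps + 1):
        power = power @ transition                   # (D^{-1} A)^k
        kernel += (decay ** k) * power               # assumed geometric decay
    return kernel

def gcn_propagate(adj, features, weight):
    """First-order spectral propagation rule: relu(D~^{-1/2} A~ D~^{-1/2} X W)."""
    a_tilde = adj + np.eye(adj.shape[0])             # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_tilde.sum(axis=1))
    a_hat = a_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_hat @ features @ weight, 0)  # ReLU activation

# Toy usage: a 4-node path graph; kernel rows serve as structural features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
structural_features = random_walk_kernel(adj)        # n x n, node independent
rng = np.random.default_rng(0)
hidden = gcn_propagate(adj, structural_features, rng.normal(size=(4, 8)))
print(hidden.shape)                                   # (4, 8)

Because the kernel is computed once as a pre-processing step, the GNN itself keeps its first-order propagation, consistent with the abstract's claim that network complexity does not increase.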

Bibliographic Details

  • Source
    Expert Systems with Applications | 2021, No. 7 | 114655.1-114655.14 | 14 pages
  • Author Affiliations

    Carnegie Mellon Univ, Dept Mech Engn, 5000 Forbes Ave, Pittsburgh, PA 15213, USA;

    Carnegie Mellon Univ, Dept Mech Engn, 5000 Forbes Ave, Pittsburgh, PA 15213, USA | Carnegie Mellon Univ, Dept Machine Learning, 5000 Forbes Ave, Pittsburgh, PA 15213, USA | Carnegie Mellon Univ, Robot Inst, 5000 Forbes Ave, Pittsburgh, PA 15213, USA | Carnegie Mellon Univ, Dept Biomed Engn, 5000 Forbes Ave, Pittsburgh, PA 15213, USA | Carnegie Mellon Univ, CyLab Secur & Privacy Inst, 5000 Forbes Ave, Pittsburgh, PA 15213, USA;

  • Indexing Information
  • Original Format: PDF
  • Language: eng
  • CLC Classification
  • Keywords

    Deep learning; Node classification; Network embedding; Graph neural networks; Attention;


