Neurocomputing

The generalization error of graph convolutional networks may enlarge with more layers



Abstract

Graph Neural Networks (GNNs) are powerful methods for analyzing non-Euclidean data. As a dominant type of GNN, Graph Convolutional Networks (GCNs) have wide applications. However, analysis of the generalization error of multilayer GCNs remains limited. Building on prior work for single-layer GCNs, this paper analyzes the generalization error of two-layer GCNs and extends the conclusions to general GCN models. First, the paper examines two-layer GCNs and establishes the algorithmic stability of the GCN learning algorithm. Then, based on this algorithmic stability, it derives the generalization stability of multilayer GCNs. The paper shows that the algorithmic stability of GCNs depends on the graph filter, its product with the node features, and the training procedure. Furthermore, the generalization error gap of GCNs tends to enlarge with more layers, which explains why deeper GCNs perform relatively worse on test datasets. (c) 2020 Elsevier B.V. All rights reserved.
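To make the objects in the abstract concrete, here is a minimal sketch assuming the standard two-layer GCN of Kipf and Welling (the paper's exact model may differ):

\[
Z = \mathrm{softmax}\!\left(\hat{A}\,\sigma\!\left(\hat{A} X W^{(1)}\right) W^{(2)}\right),
\qquad
\hat{A} = \tilde{D}^{-1/2}\,\tilde{A}\,\tilde{D}^{-1/2},
\]

where \(\tilde{A} = A + I\) is the adjacency matrix with self-loops, \(\tilde{D}\) its degree matrix, \(X\) the node-feature matrix, \(\hat{A}\) the graph filter, and \(\hat{A}X\) the filter-feature product the abstract refers to. For the generalization claim, the classical uniform-stability bound of Bousquet and Elisseeff states that a \(\beta_m\)-uniformly stable algorithm \(A_S\) trained on \(m\) samples satisfies, with probability at least \(1-\delta\),

\[
R(A_S) - R_{\mathrm{emp}}(A_S) \le 2\beta_m + \left(4 m \beta_m + M\right)\sqrt{\frac{\ln(1/\delta)}{2m}},
\]

where \(M\) is an upper bound on the loss. If \(\beta_m\) scales with a product of graph-filter norms taken over the layers, each added layer multiplies the stability constant and hence widens this gap, which is consistent with the abstract's conclusion; the paper's exact constants and filter-dependent terms may differ from this generic bound.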
