Graph Neural Networks (GNNs) are powerful methods for analyzing non-Euclidean data. As a dominant type of GNN, Graph Convolutional Networks (GCNs) have wide applications. However, analysis of the generalization error of multilayer GCNs is limited. Building on prior results for single-layer GCNs, this paper analyzes the generalization error of two-layer GCNs and extends the conclusions to general GCN models. First, this paper examines two-layer GCNs and establishes the stability of the GCN algorithm. Then, based on this algorithmic stability, the generalization stability of multilayer GCNs is obtained. This paper shows that the algorithmic stability of GCNs depends on the graph filter, its product with the node features, and the training procedure. Furthermore, the generalization error gap of GCNs tends to widen with more layers, which helps explain why GCNs with deeper layers perform relatively poorly on test datasets. (c) 2020 Elsevier B.V. All rights reserved.
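To make the objects in the abstract concrete, the following is a minimal sketch (assumed form, not the paper's exact model) of a two-layer GCN forward pass in NumPy, where the graph filter is taken to be the symmetrically normalized adjacency with self-loops, S = D^{-1/2}(A + I)D^{-1/2}; the stability analysis described above concerns quantities such as this filter S and its product with the node features X.

```python
import numpy as np

def normalized_filter(A):
    """Graph filter S = D^{-1/2} (A + I) D^{-1/2} (one common choice)."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)                    # degrees of A + I
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def two_layer_gcn(A, X, W1, W2):
    """Two-layer GCN: softmax(S * relu(S X W1) * W2)."""
    S = normalized_filter(A)
    H1 = np.maximum(S @ X @ W1, 0.0)         # layer 1: filter, transform, ReLU
    Z = S @ H1 @ W2                          # layer 2 logits
    Z -= Z.max(axis=1, keepdims=True)        # numerically stable row softmax
    P = np.exp(Z)
    return P / P.sum(axis=1, keepdims=True)

# Toy example: 4-node path graph, 3 input features, 2 classes
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.standard_normal((4, 3))
probs = two_layer_gcn(A, X,
                      rng.standard_normal((3, 5)),
                      rng.standard_normal((5, 2)))
```

The output `probs` contains per-node class probabilities; the filter name and layer sizes here are illustrative assumptions only.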