IEEE International Conference on Data Mining

Rational Neural Networks for Approximating Graph Convolution Operator on Jump Discontinuities



Abstract

For node-level graph encoding, an important recent state-of-the-art method is the graph convolutional network (GCN), which nicely integrates local vertex features and graph topology in the spectral domain. However, current studies suffer from several drawbacks: (1) graph CNNs rely on Chebyshev polynomial approximation, which produces oscillatory approximations at jump discontinuities; (2) increasing the order of the Chebyshev polynomial reduces the oscillation issue, but incurs an unaffordable computational cost; (3) Chebyshev polynomials require degree Ω(poly(1/ε)) to approximate a jump signal such as |x|, while a rational function needs only degree O(poly(log(1/ε))). However, applying rational approximation without increasing computational complexity is non-trivial because of the denominator. In this paper, the superiority of rational approximation is exploited for graph signal recovery. RationalNet is proposed to integrate rational functions and neural networks. We show that the rational function of the eigenvalues can be rewritten as a function of the graph Laplacian, which avoids multiplication by the eigenvector matrix. To focus the analysis on approximating the graph convolution operator, a graph signal regression task is formulated, under which the time complexity can be significantly reduced by the graph Fourier transform. To overcome the local minimum problem of the neural network model, a relaxed Remez algorithm is used to initialize the weight parameters. The convergence rates of RationalNet and polynomial-based methods on a jump signal are analyzed for a theoretical guarantee. Extensive experimental results demonstrate that our approach effectively characterizes jump discontinuities, outperforming competing methods by a substantial margin on both synthetic and real-world graphs.
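A minimal sketch of the core computational point in the abstract, assuming a NumPy/SciPy setting: a rational spectral filter g(t) = P(t)/Q(t) can be applied to a graph signal as P(L)x followed by a sparse linear solve with Q(L), so the eigendecomposition and the multiplication by the eigenvector matrix are never formed. The helper name rational_filter, the coefficient lists p and q, and the toy path graph are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the paper's code): apply g(L) x = Q(L)^{-1} P(L) x
# directly through the graph Laplacian, avoiding the eigendecomposition
# U g(Lambda) U^T and the multiplication by the eigenvector matrix U.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def rational_filter(L, x, p, q):
    """Apply g(L) x with g(t) = P(t)/Q(t).

    L : sparse (n, n) graph Laplacian
    x : (n,) graph signal
    p : numerator coefficients  [p0, p1, ...] for P(t) = sum_k p_k t^k
    q : denominator coefficients [q0, q1, ...] for Q(t) = sum_k q_k t^k
    """
    n = L.shape[0]
    I = sp.identity(n, format="csr")

    def poly_apply(coeffs, v):
        # Accumulate sum_k coeff_k * L^k v via repeated matrix-vector products.
        out = np.zeros_like(v)
        Lkv = v.copy()
        for k, c in enumerate(coeffs):
            if k > 0:
                Lkv = L @ Lkv
            out += c * Lkv
        return out

    # Numerator: a few sparse matrix-vector products.
    num = poly_apply(p, x)
    # Denominator: one sparse linear solve with Q(L).
    Q_L = sum(c * (L ** k if k > 0 else I) for k, c in enumerate(q))
    return spla.spsolve(sp.csr_matrix(Q_L), num)

# Toy usage on a path graph of 5 nodes with a "jump" signal.
A = sp.diags([1, 1], [-1, 1], shape=(5, 5))
L = sp.csr_matrix(sp.diags(np.ravel(A.sum(axis=1))) - A)
x = np.array([1.0, 1.0, 1.0, -1.0, -1.0])
y = rational_filter(L, x, p=[0.5, 1.0], q=[1.0, 0.5])
print(y)
```

In this sketch the denominator, which is what makes naive rational filtering expensive, reduces to a single sparse linear solve per filter application rather than an explicit spectral decomposition.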
