IEEE Transactions on Circuits and Systems

Analysis of gradient descent learning algorithms for multilayer feedforward neural networks


Abstract

Certain dynamical properties of gradient-type learning algorithms as they apply to multilayer feedforward neural networks are investigated. These properties are more related to the multilayer structure of the net than to the particular threshold units at the nodes. The analysis explains the empirical observation that the weight sequence generated by backpropagation and related stochastic gradient algorithms exhibits a long-term dependence on the initial choice of weights, and also a continued growth and/or drift long after the outputs have converged. The analysis is carried out in two steps. First, a simplified deterministic algorithm is derived using a describing function-type approach. Next, an analysis of the simplified algorithm is performed by considering an associated ordinary differential equation (ODE). Some numerical examples are given to illustrate the analysis. The dynamical behavior of backpropagation and related algorithms for the training of multilayer nets is discussed.
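The drift phenomenon described in the abstract — weights that keep growing long after the network's outputs have converged — can be reproduced in a minimal numerical sketch. This is an illustration, not the paper's construction: the 1-2-1 sigmoid network, two-point task, initialization, and learning rate are all assumptions chosen for simplicity. Because sigmoid outputs reach the 0/1 targets only in the limit of saturating pre-activations, gradient descent keeps pushing the weight norm upward even after the output error has effectively plateaued.

```python
import numpy as np

# Illustrative sketch (not from the paper): a tiny 1-2-1 sigmoid network
# trained by plain batch gradient descent on a two-point task. The output
# error converges toward zero only as the sigmoids saturate, so the weight
# norm continues to drift upward long after the outputs have converged.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)          # fixed seed for reproducibility
X = np.array([[-1.0], [1.0]])           # two inputs
T = np.array([[0.0], [1.0]])            # their 0/1 targets

# hypothetical small initial weights
W1 = rng.normal(scale=0.5, size=(1, 2)); b1 = np.zeros(2)
W2 = rng.normal(scale=0.5, size=(2, 1)); b2 = np.zeros(1)

lr, steps = 1.0, 20000
loss_hist, norm_hist = [], []
for t in range(steps):
    H = sigmoid(X @ W1 + b1)            # hidden layer
    Y = sigmoid(H @ W2 + b2)            # output layer
    E = Y - T
    loss_hist.append(float(np.mean(E ** 2)))
    norm_hist.append(float(np.sqrt((W1 ** 2).sum() + (W2 ** 2).sum())))

    # backpropagated gradients of the mean-squared error
    dY = 2.0 * E * Y * (1.0 - Y) / len(X)
    dW2 = H.T @ dY;  db2 = dY.sum(axis=0)
    dH = dY @ W2.T * H * (1.0 - H)
    dW1 = X.T @ dH;  db1 = dH.sum(axis=0)

    W2 -= lr * dW2;  b2 -= lr * db2
    W1 -= lr * dW1;  b1 -= lr * db1

mid = steps // 2
print(f"loss: {loss_hist[0]:.3f} -> {loss_hist[mid]:.5f} -> {loss_hist[-1]:.5f}")
print(f"|W| : {norm_hist[0]:.2f} -> {norm_hist[mid]:.2f} -> {norm_hist[-1]:.2f}")
```

Printing the error and weight norm at the start, midpoint, and end of training shows the error already small at the midpoint while the weight norm still increases between the midpoint and the end — the long-after-convergence growth the paper's ODE analysis explains.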
