IEEE Transactions on Neural Networks

Accelerating the training of feedforward neural networks using generalized Hebbian rules for initializing the internal representations

Abstract

This paper presents an unsupervised learning scheme for initializing the internal representations of feedforward neural networks, which accelerates the convergence of supervised learning algorithms. The initial set of internal representations is formed through a bottom-up unsupervised learning process applied before the top-down supervised training algorithm. The synaptic weights that connect the network's inputs to the hidden units can be determined through linear or nonlinear variations of a generalized Hebbian learning rule known as Oja's rule. Various generalized Hebbian rules were tested experimentally and evaluated in terms of their effect on the convergence of the supervised training process. Several experiments indicated that the proposed initialization of the internal representations significantly improves the convergence of gradient-descent-based algorithms on nontrivial training tasks, and the improvement becomes more pronounced as the size and complexity of the training task increase.