IEEE Transactions on Neural Networks

A new error function at hidden layers for fast training of multilayer perceptrons



Abstract

This paper proposes a new error function at hidden layers to speed up the training of multilayer perceptrons (MLPs). With this new hidden error function, the layer-by-layer (LBL) algorithm approximately converges to the error backpropagation algorithm with optimum learning rates. In particular, the optimum learning rate for a hidden weight vector is approximately the product of two optimum factors: one for minimizing the new hidden error function and the other for assigning hidden targets. The effectiveness of the proposed error function was demonstrated on handwritten-digit recognition and isolated-word recognition tasks. Very fast learning convergence was obtained for MLPs without the stalling problem experienced in conventional LBL algorithms.
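The abstract's core idea, assigning targets to hidden units and letting each layer minimize its own local error, can be illustrated with a minimal sketch. This is not the paper's exact algorithm: the hidden-target assignment rule, the network size, and the learning rates `eta_h` and `eta_o` below are illustrative assumptions, shown on a toy XOR problem.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Toy XOR task: 2 inputs -> 4 hidden units -> 1 output
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)
eta_h, eta_o = 0.5, 0.5  # assumed learning rates, not values from the paper

def mse():
    """Mean squared error of the current network on the toy task."""
    return float(np.mean((T - sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)) ** 2))

mse_start = mse()
for _ in range(20000):
    H = sigmoid(X @ W1 + b1)          # hidden activations
    Y = sigmoid(H @ W2 + b2)          # network output
    d_out = (T - Y) * Y * (1 - Y)     # output delta (sigmoid / MSE)

    # LBL spirit: assign hidden targets by a gradient step on the output
    # error, then let the hidden layer minimize its own local error
    # ||H_target - H||^2 rather than backpropagating through the whole net.
    H_target = H + d_out @ W2.T
    d_hid = (H_target - H) * H * (1 - H)

    W2 += eta_o * H.T @ d_out; b2 += eta_o * d_out.sum(0)
    W1 += eta_h * X.T @ d_hid; b1 += eta_h * d_hid.sum(0)

mse_end = mse()
print(f"MSE: {mse_start:.3f} -> {mse_end:.3f}")
```

With this particular target-assignment step the hidden-layer update coincides with scaled backpropagation, which is consistent with the abstract's claim that the LBL algorithm with the new hidden error function approximately converges to backpropagation with optimum learning rates.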

