IEEE Transactions on Neural Networks

Comments on 'Accelerated learning algorithm for multilayer perceptrons: optimization layer by layer'

Abstract

In the above paper by Ergezinger and Thomsen (ibid., vol. 6, 1995), a new method for training multilayer perceptrons, called optimization layer by layer (OLL), was introduced. The present paper analyzes the performance of OLL. We show, from theoretical considerations, that the amount of work required by OLL learning scales as the third power of the network size, compared with the square of the network size for the commonly used conjugate gradient (CG) training algorithms. This theoretical estimate is confirmed through a practical example. Thus, although OLL is shown to function very well for small neural networks (fewer than about 500 weights per layer), it is slower than CG for large neural networks. Next, we show that OLL does not always improve on the accuracy that can be obtained with CG. It seems that the final accuracy that can be obtained depends strongly on the initial network weights.
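The following is a minimal sketch (not from the paper) of the scaling argument in the abstract. Only the exponents come from the text: cubic per-epoch work for OLL, quadratic for CG. The constants c_oll and c_cg are hypothetical, chosen solely so that the crossover falls near the roughly 500 weights-per-layer regime the abstract reports.

```python
def work_oll(n_weights: float, c_oll: float = 1.0) -> float:
    """Work per training epoch for optimization layer by layer: O(n^3)."""
    return c_oll * n_weights**3

def work_cg(n_weights: float, c_cg: float = 500.0) -> float:
    """Work per training epoch for conjugate gradient: O(n^2)."""
    return c_cg * n_weights**2

if __name__ == "__main__":
    # With these (hypothetical) constants the ratio is n/500, so OLL is
    # cheaper below ~500 weights per layer and CG is cheaper above it,
    # mirroring the crossover described in the abstract.
    for n in (100, 500, 1000, 5000):
        ratio = work_oll(n) / work_cg(n)
        cheaper = "OLL" if ratio < 1 else "CG"
        print(f"{n:>5} weights/layer: OLL/CG work ratio = {ratio:6.2f} -> {cheaper} cheaper")
```

The point of the sketch is only that, whatever the constants, a cubic term must eventually dominate a quadratic one, so OLL's advantage on small networks cannot persist as the network grows.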
