Pattern Recognition Letters

Efficient and effective algorithms for training single-hidden-layer neural networks



Abstract

Recently there has been renewed interest in single-hidden-layer neural networks (SHLNNs). This is due to their powerful modeling ability as well as the existence of some efficient learning algorithms. A prominent example of such algorithms is the extreme learning machine (ELM), which assigns random values to the lower-layer weights. While ELM can be trained efficiently, it requires many more hidden units than conventional neural networks typically need to achieve matched classification accuracy. The use of a large number of hidden units translates to significantly increased test time, which in practice is more valuable than training time. In this paper, we propose a series of new efficient learning algorithms for SHLNNs. Our algorithms exploit both the structure of SHLNNs and the gradient information over all training epochs, and update the weights in the direction along which the overall square error is reduced the most. Experiments on the MNIST handwritten digit recognition task and the MAGIC gamma telescope dataset show that the algorithms proposed in this paper obtain significantly better classification accuracy than ELM when the same number of hidden units is used. To obtain the same classification accuracy, our best algorithm requires only 1/16 of the model size, and thus approximately 1/16 of the test time, of ELM. This large advantage is gained at the expense of at most five times the training cost incurred by ELM training.
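The ELM baseline described above trains an SHLNN by fixing the lower-layer weights at random values and fitting only the output weights in closed form. Below is a minimal NumPy sketch of that baseline, assuming a sigmoid hidden activation and a ridge-regularized least-squares solve for the output weights; the function names, hyperparameters, and synthetic data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def train_elm(X, Y, n_hidden=256, ridge=1e-3, seed=0):
    """ELM-style training of a single-hidden-layer network:
    the lower-layer weights are random and fixed; only the output
    weights are fit, via regularized least squares."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Random, fixed lower-layer weights and biases (never updated).
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    # Hidden activations (sigmoid assumed here).
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # Closed-form ridge solution for the output weights U:
    # U = (H^T H + ridge * I)^{-1} H^T Y
    U = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ Y)
    return W, b, U

def predict_elm(X, W, b, U):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ U

# Tiny usage example on synthetic data (one-hot targets, 3 classes).
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((500, 20))
    labels = rng.integers(0, 3, size=500)
    Y = np.eye(3)[labels]
    W, b, U = train_elm(X, Y, n_hidden=128)
    acc = np.mean(predict_elm(X, W, b, U).argmax(axis=1) == labels)
    print(f"training accuracy: {acc:.3f}")
```

The closed-form solve is what makes ELM fast to train; the paper's point is that matching the accuracy of gradient-trained SHLNNs this way requires many more hidden units, which inflates test-time cost.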
