IEEE International Conference on Systems, Man, and Cybernetics

Boosting and Residual Learning Scheme with Pseudoinverse Learners



Abstract

The traditional gradient descent based optimization algorithms for neural networks suffer from many vulnerabilities, such as a slow convergence rate, vanishing gradients, and getting trapped in local minima. Therefore, alternative non-gradient-descent learning algorithms, such as the pseudoinverse learning algorithm (PIL), have been proposed and widely applied in various domains. However, when a special variant of PIL that uses a random configuration of the weight parameters is adopted, its generalization ability needs further improvement, although its training efficiency is excellent. Thus, drawing on the idea of ensemble learning, we propose two methods to enhance the basic PIL. One method is equivalent to an additive model, which raises the network's performance by introducing a boosting mechanism; the other adopts a recursive way to rectify the hidden layer output of the neural network, and the relatively better model is then used for subsequent prediction. Comprehensive evaluation experiments are conducted on several datasets, and the experimental results illustrate that our proposed methods are effective in terms of classification accuracy.
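The abstract describes the two schemes only at a high level. Below is a minimal Python sketch of the underlying ideas: a pseudoinverse learner whose hidden weights are randomly configured (only the output weights are solved in closed form), combined with a boosting-style loop in which each new learner fits the residual of the current ensemble. The names (PILearner, boost_pil, predict_ensemble), the tanh activation, and the one-hot target convention are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

class PILearner:
    """Single-hidden-layer pseudoinverse learner (PIL): random, untrained
    input-to-hidden weights; output weights solved via the Moore-Penrose
    pseudoinverse of the hidden-layer output matrix."""

    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, Y):
        n_features = X.shape[1]
        # Random configuration of the hidden-layer weight parameters.
        self.W = self.rng.standard_normal((n_features, self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)        # hidden-layer output
        self.beta = np.linalg.pinv(H) @ Y       # closed-form output weights
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return H @ self.beta


def boost_pil(X, Y, n_learners=5):
    """Additive (boosting-style) ensemble: each new PIL is trained on the
    residual left by the sum of the previous learners' predictions."""
    learners, residual = [], Y.astype(float)
    for k in range(n_learners):
        learner = PILearner(seed=k).fit(X, residual)
        residual = residual - learner.predict(X)  # update the residual target
        learners.append(learner)
    return learners


def predict_ensemble(learners, X):
    # The ensemble output is the sum of all learners' predictions.
    return sum(learner.predict(X) for learner in learners)
```

For classification, the targets Y would typically be one-hot encoded and the predicted label taken as np.argmax(predict_ensemble(learners, X_test), axis=1).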
