Convergence of online gradient method for feedforward neural networks with smoothing L_(1/2) regularization penalty


Abstract

Minimization of the training regularization term has been recognized as an important objective for sparse modeling and generalization in feedforward neural networks. Most studies so far have focused on the popular L^2 regularization penalty. In this paper, we consider the convergence of the online gradient method with a smoothing L_(1/2) regularization term. For ordinary L_(1/2) regularization, the objective function contains a non-convex, non-smooth, and non-Lipschitz term, which causes oscillation of the error function and of the gradient norm. Using smoothing approximation techniques, however, this deficiency of the ordinary L_(1/2) regularization term can be overcome. This paper establishes strong convergence results for the smoothing L_(1/2) regularization. Furthermore, we prove the boundedness of the weights during network training, so the usual assumption that the weights remain bounded is no longer needed for the convergence proof. Simulation results support the theoretical findings and demonstrate that our algorithm performs better than two other algorithms with L^2 and ordinary L_(1/2) regularization, respectively.
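
To make the smoothing step concrete, one standard smoothing surrogate (an illustrative assumption; this record does not reproduce the paper's exact smoothing function) replaces |t|^{1/2} by a function that is smooth at the origin:

\[
E_i(\mathbf{w}) \;=\; \tfrac{1}{2}\bigl(f(x^i;\mathbf{w}) - y^i\bigr)^2 \;+\; \lambda \sum_j h(w_j),
\qquad
h(t) \;=\; \bigl(t^2 + \varepsilon^2\bigr)^{1/4} \;\approx\; |t|^{1/2},
\]

where \(E_i\) is the per-sample objective, \(\lambda > 0\) the regularization coefficient, and \(\varepsilon > 0\) the smoothing parameter. Since \(h'(t) = t / \bigl(2(t^2+\varepsilon^2)^{3/4}\bigr)\) is bounded, the non-smoothness and the non-Lipschitz gradient of \(|t|^{1/2}\) at \(t = 0\) disappear.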
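Below is a minimal runnable sketch of the online training loop the abstract describes, for a one-hidden-layer sigmoid network. The surrogate (t^2 + eps^2)^(1/4), the network shape, and all names and hyperparameters (eta, lam, eps) are assumptions for illustration, not the paper's exact choices.

```python
import numpy as np

def smoothed_sqrt_penalty(w, eps=1e-2):
    """Smooth surrogate for the L_(1/2) penalty sum_j |w_j|^(1/2).

    Assumed form (w^2 + eps^2)^(1/4); the paper's exact smoothing
    function may differ, but any choice that is smooth at 0 serves here.
    """
    return np.sum((w ** 2 + eps ** 2) ** 0.25)

def smoothed_sqrt_grad(w, eps=1e-2):
    """Elementwise gradient of the surrogate: w / (2 (w^2 + eps^2)^(3/4))."""
    return w / (2.0 * (w ** 2 + eps ** 2) ** 0.75)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_online(X, y, n_hidden=8, eta=0.05, lam=1e-3, epochs=50, seed=0):
    """Online gradient descent: one weight update per training sample,
    minimizing 0.5 * error^2 + lam * (smoothed L_(1/2) penalty)."""
    rng = np.random.default_rng(seed)
    V = rng.normal(scale=0.5, size=(n_hidden, X.shape[1]))  # input->hidden
    w = rng.normal(scale=0.5, size=n_hidden)                # hidden->output
    for _ in range(epochs):
        for i in rng.permutation(len(X)):      # visit samples one at a time
            h = sigmoid(V @ X[i])              # hidden-layer activations
            err = w @ h - y[i]                 # scalar output error
            gw = err * h + lam * smoothed_sqrt_grad(w)
            gV = err * np.outer(w * h * (1.0 - h), X[i]) \
                 + lam * smoothed_sqrt_grad(V)
            w -= eta * gw                      # online gradient steps
            V -= eta * gV
    return V, w

# Tiny usage example: fit a noisy target with an irrelevant third input.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 3))
    y = X @ np.array([1.0, -2.0, 0.0]) + 0.05 * rng.normal(size=200)
    V, w = train_online(X, y)
    pred = sigmoid(X @ V.T) @ w
    print("train MSE:", np.mean((pred - y) ** 2))
```

The surrogate drives small weights toward zero (sparsity) while keeping the penalty gradient bounded near the origin, which is what permits a boundedness and convergence analysis without assuming bounded weights a priori.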

Bibliographic record

  • Source
    Neurocomputing, 2014, No. 5, pp. 208-216 (9 pages)
  • Author affiliations

    School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, PR China; Department of Electrical and Computer Engineering, University of Louisville, Louisville, KY 40292, USA;

    Department of Electrical and Computer Engineering, University of Louisville, Louisville, KY 40292, USA; Spoleczna Akademia Nauk, 90-011 Lodz, Poland;

    School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, PR China;

  • Indexed in: Science Citation Index (SCI); Engineering Index (EI)
  • Format: PDF
  • Language: English
  • Keywords

    Feedforward neural networks; Online gradient method; Smoothing L_(1/2) regularization; Boundedness; Convergence;

