Neural Processing Letters

Enhance the Performance of Deep Neural Networks via L2 Regularization on the Input of Activations


Abstract

Deep neural networks (DNNs) are attracting increasing attention in machine learning. However, information propagation becomes harder as networks get deeper, which makes the optimization of DNNs extremely difficult. One reason for this difficulty is the saturation of hidden units. In this paper, we propose a novel method named RegA to reduce the influence of saturation on ReLU-DNNs (DNNs with ReLU activations). Instead of changing the activation function or the initialization strategy, our method explicitly encourages the pre-activations to move out of the saturation region. Specifically, we add an auxiliary objective, induced by the L2-norm of the pre-activation values, to the optimization problem. This auxiliary objective helps activate more units and promotes effective information propagation in ReLU-DNNs. Through experiments on several large-scale real-world datasets, we demonstrate that RegA learns better representations and helps ReLU-DNNs achieve better convergence and accuracy.
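The following is a minimal PyTorch sketch of the mechanism described in the abstract, not the authors' implementation: the training loss gets an auxiliary L2 term computed on the pre-activation values (the inputs to the ReLUs). The network shape, the penalty weight reg_lambda, and the averaging over units are illustrative assumptions; the exact formulation is given in the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ReLUMLP(nn.Module):
        """Small ReLU network that also returns its pre-activation values."""

        def __init__(self, in_dim=784, hidden=256, out_dim=10):
            super().__init__()
            self.fc1 = nn.Linear(in_dim, hidden)
            self.fc2 = nn.Linear(hidden, hidden)
            self.head = nn.Linear(hidden, out_dim)

        def forward(self, x):
            z1 = self.fc1(x)          # pre-activation of layer 1
            h1 = F.relu(z1)
            z2 = self.fc2(h1)         # pre-activation of layer 2
            h2 = F.relu(z2)
            return self.head(h2), [z1, z2]

    def rega_loss(logits, targets, pre_acts, reg_lambda=1e-4):
        # Task loss plus an auxiliary L2 objective on the pre-activations,
        # as sketched here; reg_lambda is an assumed hyperparameter.
        task = F.cross_entropy(logits, targets)
        aux = sum(z.pow(2).mean() for z in pre_acts)
        return task + reg_lambda * aux

    # Usage: one training step on random data, just to show the plumbing.
    model = ReLUMLP()
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
    logits, pre_acts = model(x)
    loss = rega_loss(logits, y, pre_acts)
    opt.zero_grad()
    loss.backward()
    opt.step()

Because the auxiliary term acts on the inputs to the activations rather than on the weights, it directly discourages units from sitting deep in the zero-gradient (saturated) region of the ReLU while leaving the choice of activation function and initialization unchanged.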
