Published in: International Joint Conference on Neural Networks

Training Deep Neural Networks with Gradual Deconvexification



Abstract

A new method of training deep neural networks, including convolutional networks, is proposed. The method deconvexifies the normalized risk-averting error (NRAE) gradually and switches to the risk-averting error (RAE) whenever RAE becomes computationally manageable. The method creates tunnels between the depressed regions around saddle points, tilts the plateaus, and eliminates nonglobal local minima. Numerical experiments show the effectiveness of gradual deconvexification as compared with unsupervised pretraining. After the minimization process, a statistical pruning method is used to enhance the generalization capability of the trained neural network. Numerical results show further reduction of the testing criterion.
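The two training criteria named in the abstract can be sketched numerically. The NRAE/RAE forms below follow Lo's convexification formulation (C_λ = (1/λ) ln((1/K) Σ_k exp(λ e_k²)) and J_λ = Σ_k exp(λ e_k²)); these formulas, the λ schedule, and the overflow threshold are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def nrae(errors, lam):
    """Normalized risk-averting error (assumed form):
        C_lam = (1/lam) * ln( (1/K) * sum_k exp(lam * e_k^2) )
    Evaluated with a log-sum-exp shift so that large lam does not overflow.
    """
    e2 = np.asarray(errors, dtype=float) ** 2
    m = e2.max()
    # Factor out exp(lam * m): computing exp(lam * e2) directly overflows
    # float64 for large lam, but exp(lam * (e2 - m)) stays in (0, 1].
    return m + np.log(np.mean(np.exp(lam * (e2 - m)))) / lam

def rae(errors, lam):
    """Risk-averting error (assumed form): sum_k exp(lam * e_k^2).
    Usable only while the exponentials stay within float64 range."""
    e2 = np.asarray(errors, dtype=float) ** 2
    return float(np.sum(np.exp(lam * e2)))

# Illustrative gradual-deconvexification loop: start from a large lam
# (heavily convexified landscape), shrink it each stage, and switch the
# criterion from NRAE to RAE once RAE is numerically manageable.
errors = np.array([0.1, 0.5, 2.0])  # hypothetical per-sample errors
for lam in (1000.0, 100.0, 10.0, 1.0):
    rae_manageable = lam * (errors ** 2).max() < 700.0  # exp() overflow guard
    criterion = rae(errors, lam) if rae_manageable else nrae(errors, lam)
```

As λ → 0 the NRAE approaches the ordinary mean squared error, and as λ → ∞ it approaches the largest squared error, which is why a large initial λ smooths out nonglobal minima before the schedule relaxes it.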


