IEEE Transactions on Neural Networks and Learning Systems

Evolving Deep Neural Networks via Cooperative Coevolution With Backpropagation

Abstract

Deep neural networks (DNNs), characterized by sophisticated architectures capable of learning a hierarchy of feature representations, have achieved remarkable successes in various applications. Learning DNN parameters is a crucial but challenging task that is commonly resolved by using gradient-based backpropagation (BP) methods. However, BP-based methods suffer from severe initialization sensitivity and are prone to getting trapped in inferior local optima. To address these issues, we propose a DNN learning framework, called BPCC, that hybridizes cooperative coevolution (CC)-based optimization with BP-based gradient descent, and implement it by devising a computationally efficient CC-based optimization technique dedicated to DNN parameter learning. In BPCC, BP executes intermittently for multiple training epochs. Whenever the execution of BP in a training epoch cannot sufficiently decrease the training objective function value, CC kicks in, using the parameter values derived by BP as its starting point. The best parameter values obtained by CC then act as the starting point of BP in its next training epoch. In CC-based optimization, the overall parameter learning task is decomposed into many subtasks, each learning a small portion of the parameters. These subtasks are individually addressed in a cooperative manner. In this article, we treat neurons as the basic decomposition units. Furthermore, to reduce the computational cost, we devise a maturity-based subtask selection strategy that selectively solves subtasks of higher priority. Experimental results demonstrate the superiority of the proposed method over common-practice DNN parameter learning techniques.
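
To make the alternation between BP and CC concrete, below is a minimal, self-contained Python sketch of the BPCC control loop as described in the abstract. It is an illustration under stated assumptions, not the paper's implementation: the toy objective, the per-block Gaussian-mutation search standing in for the CC optimizer, the block sizes, and the improvement threshold are all placeholders, and the maturity-based subtask selection is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(theta):
    # Toy non-convex objective standing in for the DNN training loss.
    return float(np.sum(theta**2) + 0.5 * np.sum(np.sin(3.0 * theta)))

def grad(theta):
    # Analytic gradient of the toy loss (BP would supply this for a real DNN).
    return 2.0 * theta + 1.5 * np.cos(3.0 * theta)

def bp_epoch(theta, lr=0.01, steps=50):
    # One "training epoch" of plain gradient descent.
    for _ in range(steps):
        theta = theta - lr * grad(theta)
    return theta

def cc_round(theta, blocks, sigma=0.1, trials=20):
    # One cooperative-coevolution round: optimize each block (subtask)
    # in turn while holding all other blocks fixed, here via simple
    # Gaussian mutation with greedy acceptance.
    best, best_f = theta.copy(), loss(theta)
    for idx in blocks:
        for _ in range(trials):
            cand = best.copy()
            cand[idx] += sigma * rng.standard_normal(len(idx))
            f = loss(cand)
            if f < best_f:
                best, best_f = cand, f
    return best

theta = rng.standard_normal(12)
# Treat each contiguous group of 3 parameters as one "neuron" subtask.
blocks = [np.arange(i, i + 3) for i in range(0, 12, 3)]

prev = loss(theta)
for epoch in range(10):
    theta = bp_epoch(theta)
    cur = loss(theta)
    # If the BP epoch failed to decrease the loss enough, let CC take
    # over from BP's current parameters; CC's best solution then seeds
    # the next BP epoch.
    if prev - cur < 1e-3 * abs(prev):
        theta = cc_round(theta, blocks)
        cur = loss(theta)
    prev = cur
    print(f"epoch {epoch}: loss = {cur:.4f}")
```

The structural point the sketch preserves is the hand-off: CC starts from BP's latest parameters whenever a BP epoch stalls, and CC's best solution seeds the next BP epoch, with the parameter vector decomposed into per-neuron subtasks that are searched one at a time.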