International Conference on Computing and Data Science

An Overview of Improved Gradient Descent Algorithms for DNN Training within Significant Revolutions of Training Frameworks

Abstract

Training a given Deep Neural Network (DNN) requires fitting its parameters, and Gradient Descent (GD) is one of the most common training algorithms. This article revisits classical and emerging GD-based algorithms that combat poor convergence in DNN training as training frameworks shift from non-distributed to distributed settings. We also concentrate on asynchronous and synchronous schemes within distributed training. Additionally, we introduce some improved evolutionary methods for deep learning as alternatives to GD, which have proved superior to GD in terms of convergence.