
An Enhanced Optimization Scheme Based on Gradient Descent Methods for Machine Learning


Abstract

The learning process of machine learning consists of finding the values of unknown weights in a cost function by minimizing that cost function over the learning data. However, since the cost function is not convex, finding its minimum value is difficult. Existing methods for finding minima typically use the first derivative of the cost function. When a local minimum (rather than the global minimum) is reached, the first derivative of the cost function becomes zero, so these methods return the local minimum value and the desired global minimum cannot be found. To overcome this problem, in this paper we modify one of the existing schemes, the adaptive momentum estimation scheme, by adding a new term so that the new optimizer does not stay at a local minimum. The convergence condition and convergence value of the proposed scheme are also analyzed, and further illustrated through several numerical experiments with non-convex cost functions.
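The abstract does not give the exact form of the added term. The sketch below is a minimal Python illustration of the general idea only: an Adam-style update (adaptive momentum estimation) plus a hypothetical escape step that perturbs the iterate when the gradient norm is near zero, i.e. when a stationary point such as a local minimum is detected. The names `adam_with_escape`, `grad_fn`, and `escape_scale`, and the perturbation rule itself, are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def adam_with_escape(grad_fn, theta, alpha=0.001, beta1=0.9, beta2=0.999,
                     eps=1e-8, escape_scale=0.1, tol=1e-6, n_steps=10000):
    """Adam-style update plus a hypothetical escape term.

    The paper adds a new term to adaptive momentum estimation so the
    optimizer does not stay at a local minimum; its exact form is not in
    the abstract, so a random perturbation triggered near stationary
    points stands in for it here.
    """
    m = np.zeros_like(theta)  # first-moment (momentum) estimate
    v = np.zeros_like(theta)  # second-moment estimate
    for t in range(1, n_steps + 1):
        g = grad_fn(theta)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)  # bias-corrected moment estimates
        v_hat = v / (1 - beta2 ** t)
        theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
        # Hypothetical escape step: at a stationary point the gradient
        # (and hence the standard update) vanishes, so kick the iterate
        # away instead of letting it settle there.
        if np.linalg.norm(g) < tol:
            theta = theta + escape_scale * np.random.randn(*theta.shape)
    return theta

# Example on a non-convex 1-D cost f(x) = x**4 - 3*x**2 + x,
# whose derivative is 4*x**3 - 6*x + 1: it has a local minimum near
# x ≈ 1.15 and a deeper global minimum near x ≈ -1.32.
theta_star = adam_with_escape(lambda x: 4 * x**3 - 6 * x + 1, np.array([2.0]))
```

The gradient-norm trigger is just one simple stand-in; how effectively such an escape step leaves a basin depends on the size and form of the added term, which is exactly what the paper's convergence analysis addresses.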