
A Survey on the Individual Convergence of Stochastic Optimization Methods in Machine Learning


Abstract

Stochastic optimization algorithms are among the state-of-the-art methods for solving large-scale machine learning problems, and the central questions are whether the optimal convergence rate is attained and whether the structure of the learning problem is preserved. So far, many kinds of stochastic optimization algorithms have been proposed for solving regularized loss problems. However, most of them only establish convergence rates for the averaged output, which fails to preserve even the simplest sparse structure. In contrast to the averaged output, the individual solution keeps sparsity very well, and its optimal convergence rate has been extensively explored as an open problem. On the other hand, the commonly used assumption of unbiased gradients in stochastic optimization often does not hold in practice; in the biased case, the bias term in the convergence bound of accelerated algorithms accumulates with the iterations, which makes the accelerated algorithms inapplicable. This paper surveys the state of the art and the open problems concerning first-order stochastic gradient methods, including the individual convergence rate, biased gradients and nonconvex problems, and on this basis points out several problems worth further study.
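The contrast drawn above between the averaged output and the individual iterate can be illustrated with a small numerical sketch. The following Python snippet is not taken from the paper; the synthetic data, step-size schedule and regularization parameter `lam` are arbitrary assumptions. It runs proximal stochastic gradient descent on an L1-regularized least-squares problem and compares the number of nonzero coordinates in the last (individual) iterate with that in the running average of all iterates: soft-thresholding keeps each individual iterate exactly sparse, while averaging typically fills in many small nonzero entries.

```python
import numpy as np

# Illustrative sketch (not from the paper): proximal SGD on an L1-regularized
# least-squares problem.  Problem sizes, step sizes and lam are arbitrary choices.
rng = np.random.default_rng(0)
n, d = 1000, 100
w_true = np.zeros(d)
w_true[:5] = 1.0                                 # truly sparse target
X = rng.standard_normal((n, d))
y = X @ w_true + 0.01 * rng.standard_normal(n)

lam = 0.1                                        # L1 regularization strength (assumed)
T = 5000
w = np.zeros(d)                                  # individual iterate
w_avg = np.zeros(d)                              # running average of all iterates

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

for t in range(1, T + 1):
    i = rng.integers(n)                          # one sample -> unbiased stochastic gradient
    g = (X[i] @ w - y[i]) * X[i]
    eta = 1.0 / np.sqrt(t)                       # typical O(1/sqrt(t)) step size
    w = soft_threshold(w - eta * g, eta * lam)   # individual iterate stays exactly sparse
    w_avg += (w - w_avg) / t                     # averaged output accumulates small nonzeros

print("nonzeros in individual iterate:", np.count_nonzero(w))
print("nonzeros in averaged output:   ", np.count_nonzero(np.round(w_avg, 8)))
```

Under these assumptions, the individual iterate typically ends with only a handful of nonzero coordinates, whereas the averaged output retains many small nonzero entries, which is the structure-preservation issue the survey focuses on.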
