Convergence of a Stochastic Gradient Method with Momentum for Non-Smooth Non-Convex Optimization



Abstract

Stochastic gradient methods with momentum are widely used in applications and lie at the core of optimization subroutines in many popular machine learning libraries. However, their sample complexities have not been established for problems that are neither convex nor smooth. This paper establishes the convergence rate of a stochastic subgradient method with a Polyak-type momentum term for a broad class of non-smooth, non-convex, and constrained optimization problems. Our key innovation is the construction of a special Lyapunov function for which the proven complexity can be achieved without any tuning of the momentum parameter. For smooth problems, we extend the known complexity bound to the constrained case and demonstrate how the unconstrained case can be analyzed under weaker assumptions than the state of the art. Numerical results confirm our theoretical developments.
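To make the method concrete, here is a minimal sketch of the kind of algorithm the abstract describes: a projected stochastic subgradient step with a Polyak-type (heavy-ball) momentum term, run on a standard non-smooth non-convex (weakly convex) test problem over the unit ball. The robust phase-retrieval objective, the step sizes, and all function names are illustrative assumptions, not the paper's implementation or its tuning-free parameter choice.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
x_true = rng.normal(size=d)
x_true /= np.linalg.norm(x_true)  # planted signal on the unit sphere

def subgrad_oracle(x):
    # Stochastic subgradient of f(x) = E_a |(a^T x)^2 - (a^T x_true)^2|,
    # a standard non-smooth, non-convex (weakly convex) test problem.
    a = rng.normal(size=d)
    r = (a @ x) ** 2 - (a @ x_true) ** 2
    return 2.0 * (a @ x) * a * np.sign(r)

def project(x):
    # Euclidean projection onto the unit l2 ball (the constraint set here).
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def heavy_ball_subgradient(x0, alpha=1e-3, beta=0.9, iters=20000):
    # Polyak-type momentum update with projection:
    #   x_{k+1} = P_C( x_k - alpha * g_k + beta * (x_k - x_{k-1}) ),
    # where g_k is a stochastic subgradient at x_k.
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        g = subgrad_oracle(x)
        x_next = project(x - alpha * g + beta * (x - x_prev))
        x_prev, x = x, x_next
    return x

x_hat = heavy_ball_subgradient(rng.normal(size=d) / np.sqrt(d))
# Distance to the planted signal, up to the inherent sign ambiguity.
print(min(np.linalg.norm(x_hat - x_true), np.linalg.norm(x_hat + x_true)))
```

The recursion itself is the classical heavy-ball update; per the abstract, the paper's contribution is showing, via a specially constructed Lyapunov function, that this scheme attains the proven complexity in the non-smooth non-convex constrained setting without tuning the momentum parameter beta.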
