Mathematical Programming

Optimized first-order methods for smooth convex minimization



Abstract

We introduce new optimized first-order methods for smooth unconstrained convex minimization. Drori and Teboulle (Math Program 145(1-2):451-482, 2014. doi:10.1007/s10107-013-0653-0) recently described a numerical method for computing the N-iteration optimal step coefficients in a class of first-order algorithms that includes gradient methods, heavy-ball methods (Polyak in USSR Comput Math Math Phys 4(5):1-17, 1964. doi:10.1016/0041-5553(64)90137-5), and Nesterov's fast gradient methods (Nesterov in Sov Math Dokl 27(2):372-376, 1983; Math Program 103(1):127-152, 2005. doi:10.1007/s10107-004-0552-5). However, the numerical method in Drori and Teboulle (2014) is computationally expensive for large N, and the corresponding numerically optimized first-order algorithm in Drori and Teboulle (2014) requires impractical memory and computation for large-scale optimization problems. In this paper, we propose optimized first-order algorithms that achieve a convergence bound that is two times smaller than that of Nesterov's fast gradient methods; our bound is found analytically and refines the numerical bound in Drori and Teboulle (2014). Furthermore, the proposed optimized first-order methods have efficient forms that are remarkably similar to Nesterov's fast gradient methods.
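The abstract only summarizes the result, but the "efficient form remarkably similar to Nesterov's fast gradient methods" that it alludes to can be sketched concretely. The sketch below follows the commonly cited optimized gradient method (OGM) recursion: a standard 1/L gradient step, Nesterov-style momentum, plus one extra correction term, with a modified step factor on the final iteration. The function names and parameters (`grad`, `L`, `x0`, `N`) are my notation, and the exact recursion should be checked against the paper itself.

```python
import numpy as np

def ogm(grad, L, x0, N):
    """Sketch of an optimized gradient method (OGM) for minimizing a smooth
    convex f with L-Lipschitz gradient; an illustration, not the paper's
    verbatim pseudocode."""
    x = np.asarray(x0, dtype=float)
    y = x.copy()
    theta = 1.0
    for i in range(N):
        y_new = x - grad(x) / L  # usual gradient step with 1/L step size
        # The final iteration uses a modified factor (8 instead of 4),
        # which is what tightens the last-iterate convergence bound.
        c = 8.0 if i == N - 1 else 4.0
        theta_new = (1.0 + np.sqrt(1.0 + c * theta * theta)) / 2.0
        # Momentum: Nesterov's (y_new - y) term plus an extra (y_new - x)
        # correction -- the key difference from the standard fast
        # gradient method, and the source of the 2x smaller bound.
        x = (y_new
             + (theta - 1.0) / theta_new * (y_new - y)
             + theta / theta_new * (y_new - x))
        y, theta = y_new, theta_new
    return x
```

On a simple quadratic such as f(x) = ½ xᵀAx with A = diag(1, 10) and L = 10, the iteration drives f toward its minimum of 0 at the accelerated O(1/N²) rate, matching the behavior the abstract attributes to the optimized methods.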
