IEEE International Conference on Acoustics, Speech and Signal Processing

A double incremental aggregated gradient method with linear convergence rate for large-scale optimization

Abstract

This paper considers the problem of minimizing the average of a finite set of strongly convex functions. We introduce a double incremental aggregated gradient method (DIAG) that computes the gradient of only one function at each iteration, chosen according to a cyclic scheme, and uses the aggregated average gradient of all the functions to approximate the full gradient. We prove not only that the proposed DIAG method converges linearly to the optimal solution, but also that its linear convergence factor justifies the advantage of incremental methods over full batch gradient descent. In particular, we show theoretically and empirically that one pass of DIAG is more efficient than one iteration of gradient descent.
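The abstract does not spell out the update rule, so the following is a minimal NumPy sketch of a cyclic, doubly aggregated update in the spirit described above: each iteration evaluates the gradient of a single component function and averages both the stored iterates and the stored gradients. The names (`diag_sketch`, `grads`, `alpha`) and the exact initialization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def diag_sketch(grads, x0, alpha, n_passes=50):
    """Sketch of a cyclic double incremental aggregated gradient loop.

    grads -- list of callables; grads[i](x) returns the gradient of f_i at x
    alpha -- step size, assumed small enough for the strongly convex setting
    """
    n = len(grads)
    # y[i] is the iterate at which grad f_i was last evaluated;
    # g[i] is that (possibly stale) gradient. Initialization costs one
    # full pass, after which each iteration needs a single gradient.
    y = np.tile(x0, (n, 1))
    g = np.array([gi(x0) for gi in grads])
    for k in range(n_passes * n):
        i = k % n  # cyclic choice of the component to refresh
        # "double" aggregation: average the delayed iterates as well as
        # the delayed gradients, rather than stepping from the last iterate
        x = y.mean(axis=0) - alpha * g.mean(axis=0)
        y[i] = x            # record where f_i's gradient is now evaluated
        g[i] = grads[i](x)  # the one gradient evaluation of this iteration
    return x
```

For a quick check, `grads` could be the per-sample gradients of a least-squares objective, e.g. `[lambda x, a=a, b=b: a * (a @ x - b) for a, b in zip(A, rhs)]`; the averaged-iterate term in the update is what distinguishes this sketch from a plain incremental aggregated gradient (IAG) step.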