IBM Journal of Research and Development

Linear Convergence of the Conjugate Gradient Method



Abstract

There are two procedures for applying the method of conjugate gradients to the problem of minimizing a convex nonlinear function: the "continued" method, and the "restarted" method, in which all data except the best previous point are discarded and the procedure is begun anew from that point. It is demonstrated by example that, in the absence of the standard initial condition, the continued conjugate gradient method applied to a quadratic function converges to the solution no better than linearly. Furthermore, it is shown that for a general nonlinear function the continued (nonrestarted) conjugate gradient method converges no worse than linearly.
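For reference, the standard definition behind these rate claims (added here; it is not restated in the abstract): a sequence {x_k} converges at least linearly to x* if there is a constant c in (0, 1) such that ||x_{k+1} - x*|| <= c ||x_k - x*|| for all sufficiently large k. "No better than linearly" then means the error ratios ||x_{k+1} - x*|| / ||x_k - x*|| stay bounded away from zero, so superlinear convergence fails; "no worse than linearly" means some such c < 1 exists.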
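As a purely illustrative sketch of the two procedures the abstract contrasts, the following Python code implements one common nonlinear conjugate gradient variant (Fletcher-Reeves) with an optional periodic restart. The line search, test problem, restart period, and all names here are assumptions chosen for illustration; none of this is taken from the paper itself.

import numpy as np

def conjugate_gradient(f, grad, x0, restart_every=None, iters=500, tol=1e-10):
    # Fletcher-Reeves nonlinear CG with a backtracking (Armijo) line search.
    # restart_every=None gives the "continued" method; an integer n gives the
    # "restarted" method: every n steps, discard all direction data and
    # resume from the current point with a steepest-descent step.
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for k in range(1, iters + 1):
        if g @ d >= 0:                 # safeguard: ensure a descent direction
            d = -g
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d) and t > 1e-16:
            t *= 0.5                   # backtrack until the Armijo condition holds
        x = x + t * d
        g_new = grad(x)
        if np.linalg.norm(g_new) < tol:
            return x, k
        if restart_every is not None and k % restart_every == 0:
            d = -g_new                 # restart: forget all previous directions
        else:
            beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
            d = -g_new + beta * d      # continued: retain conjugacy information
        g = g_new
    return x, iters

# Illustrative convex quadratic f(x) = 0.5 * x'Ax, minimized at the origin.
A = np.diag([1.0, 10.0, 100.0])
f = lambda x: 0.5 * (x @ A @ x)
grad = lambda x: A @ x

for label, period in [("continued", None), ("restarted", 3)]:
    _, steps = conjugate_gradient(f, grad, np.ones(3), restart_every=period)
    print(f"{label}: converged in {steps} iterations")

With an exact line search and the standard initial direction (steepest descent), CG terminates on an n-dimensional quadratic in at most n steps; the paper's point is what happens to the continued method when that initial condition is absent.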
