There are two procedures for applying the method of conjugate gradients to the minimization of a convex nonlinear function: the “continued” method, in which the iteration proceeds without interruption, and the “restarted” method, in which all data except the best point found so far are periodically discarded and the procedure is begun anew from that point. It is demonstrated by example that, in the absence of the standard initialization (a steepest-descent first step), the continued conjugate gradient method converges no better than linearly, even on a quadratic function. Conversely, it is shown that on a general nonlinear function the continued (nonrestarted) conjugate gradient method converges no worse than linearly.
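The two procedures contrasted above can be sketched in a single routine: a Fletcher–Reeves-style nonlinear conjugate gradient loop in which an optional restart period resets the search direction to steepest descent, discarding all accumulated direction data except the current point. This is a minimal illustrative sketch, not the paper's construction; the test function, the backtracking line search, and the descent-direction safeguard are all assumptions added to keep the example self-contained.

```python
import numpy as np

def quartic(x):
    # Illustrative convex, nonquadratic objective (not from the paper).
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 4

def quartic_grad(x):
    return np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 2.0) ** 3])

def backtrack(f, x, d, t=1.0, shrink=0.5, t_min=1e-12):
    # Simple backtracking line search: halve the step until f decreases.
    fx = f(x)
    while f(x + t * d) >= fx and t > t_min:
        t *= shrink
    return t

def nonlinear_cg(f, grad, x0, n_iter=2000, restart_every=None):
    """Fletcher–Reeves nonlinear CG.

    restart_every=None -> "continued" method: beta is never reset.
    restart_every=k    -> "restarted" method: every k steps the direction
                          is reset to steepest descent, discarding all data
                          except the current point.
    """
    x = x0.astype(float).copy()
    g = grad(x)
    d = -g  # standard initialization: first step is steepest descent
    for k in range(1, n_iter + 1):
        t = backtrack(f, x, d)
        x = x + t * d
        g_new = grad(x)
        if restart_every is not None and k % restart_every == 0:
            d = -g_new                        # restart from the current point
        else:
            beta = (g_new @ g_new) / (g @ g)  # Fletcher–Reeves coefficient
            d = -g_new + beta * d
            if d @ g_new >= 0.0:              # safeguard: keep d a descent
                d = -g_new                    # direction (assumption, for
        g = g_new                             # robustness of the sketch)
    return x
```

Running both variants from the same starting point, e.g. `nonlinear_cg(quartic, quartic_grad, np.array([3.0, 0.0]))` versus the same call with `restart_every=5`, drives the objective toward its minimum in either mode; the paper's point concerns the *rate* at which this happens, not whether it happens.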