In this paper, we present a new supermemory gradient method without line search for unconstrained optimization problems. The new method guarantees a descent direction at each iteration. It makes full use of the iterative information from multiple previous steps and avoids the storage and computation of matrices associated with the Hessian of the objective function, so it is well suited to large-scale optimization problems. We prove its global convergence under mild conditions and analyze its linear convergence rate when the objective function is uniformly convex and twice continuously differentiable.
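To make the general scheme concrete, the following is a minimal Python sketch of a generic supermemory gradient iteration with a fixed stepsize in place of a line search. The function name, the damping rule for the memory coefficients, and the constant stepsize `alpha` are illustrative assumptions for exposition only, not the formulas proposed in the paper; the damping merely shows one standard way the descent property can be enforced.

```python
import numpy as np

def supermemory_gradient(grad, x0, m=3, alpha=1e-2, rho=0.5,
                         tol=1e-6, max_iter=10_000):
    """Illustrative supermemory gradient iteration (not the paper's method).

    Direction: d_k = -g_k + sum_{i=1}^{m} beta_{k,i} d_{k-i}, where the
    memory coefficients beta_{k,i} are damped so that
    g_k^T d_k <= -(1 - rho) * ||g_k||^2, i.e. d_k is a descent direction.
    A fixed stepsize replaces the line search.
    """
    x = np.asarray(x0, dtype=float)
    past = []                                  # last m search directions
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm <= tol:
            break
        d = -g
        for dp in past:
            # |beta * g^T dp| <= rho * ||g||^2 / m by Cauchy-Schwarz,
            # so summing over at most m stored directions keeps descent.
            beta = rho * gnorm**2 / (m * max(np.linalg.norm(dp) * gnorm, 1e-16))
            d = d + beta * dp
        x = x + alpha * d                      # fixed stepsize, no line search
        past = (past + [d])[-m:]               # keep the m most recent directions
    return x

# Usage: minimize a simple convex quadratic f(x) = x^T A x / 2.
if __name__ == "__main__":
    A = np.diag([1.0, 10.0])
    x_min = supermemory_gradient(lambda x: A @ x, x0=np.array([5.0, -3.0]))
    print(x_min)  # approaches the minimizer at the origin
```

Only gradients and the last m directions are stored, which reflects the storage pattern the abstract describes: no Hessian-related matrices are formed or factored.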