
Parallel Quasi-Newton Methods for Unconstrained Optimization


Abstract

This document discusses methods for solving the unconstrained optimization problem on parallel computers when the number of variables is small enough that quasi-Newton methods can be used. The authors concentrate mainly, but not exclusively, on problems where function evaluation is expensive. They first discuss ways to parallelize both the function evaluation costs and the linear algebra calculations of the standard sequential secant method, the BFGS method. They then describe new methods that are appropriate when there are enough processors to evaluate the function, the gradient, and part, but not all, of the Hessian at each iteration. New algorithms that utilize this information are developed and their convergence properties are analyzed. Computational experiments are presented showing that these algorithms are superior to parallelizations of either the BFGS method or Newton's method under the stated assumptions on the number of processors and the cost of function evaluation. Finally, the authors discuss ways to effectively utilize the gradient values at unsuccessful trial points that are available in the parallel methods and also in some sequential software packages.
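As a rough illustration of the first idea mentioned in the abstract, parallelizing the function-evaluation cost inside a BFGS iteration, the sketch below runs the n+1 objective evaluations needed for a forward-difference gradient concurrently. This is not the authors' algorithm, only a minimal example under assumed conditions: the objective `f` is expensive, thread-safe, and smooth, and the function names and tolerances are illustrative. Real speedup would further require `f` to release the GIL or the evaluations to be farmed out to separate processes.

```python
# Minimal sketch (illustrative, not the paper's method): plain BFGS where the
# finite-difference gradient evaluations are dispatched to a thread pool.
import numpy as np
from concurrent.futures import ThreadPoolExecutor


def fd_gradient_parallel(f, x, pool, h=1e-6):
    """Forward-difference gradient; all n+1 function values computed concurrently."""
    n = x.size
    points = [x] + [x + h * np.eye(n)[i] for i in range(n)]
    vals = list(pool.map(f, points))
    f0 = vals[0]
    grad = np.array([(vals[i + 1] - f0) / h for i in range(n)])
    return f0, grad


def bfgs_parallel(f, x0, max_iter=100, tol=1e-6):
    """BFGS with the gradient's function evaluations done in parallel."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)  # inverse-Hessian approximation
    with ThreadPoolExecutor() as pool:
        fx, g = fd_gradient_parallel(f, x, pool)
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            p = -H @ g  # quasi-Newton search direction
            # crude Armijo backtracking line search
            t = 1.0
            while f(x + t * p) > fx + 1e-4 * t * (g @ p) and t > 1e-10:
                t *= 0.5
            x_new = x + t * p
            fx_new, g_new = fd_gradient_parallel(f, x_new, pool)
            s, y = x_new - x, g_new - g
            sy = s @ y
            if sy > 1e-12:  # standard BFGS update of the inverse Hessian
                rho = 1.0 / sy
                V = np.eye(n) - rho * np.outer(s, y)
                H = V @ H @ V.T + rho * np.outer(s, s)
            x, fx, g = x_new, fx_new, g_new
    return x, fx


if __name__ == "__main__":
    rosen = lambda z: (1 - z[0]) ** 2 + 100 * (z[1] - z[0] ** 2) ** 2
    print(bfgs_parallel(rosen, [-1.2, 1.0]))
```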
