SIAM Journal on Control and Optimization

PARALLEL GRADIENT DISTRIBUTION IN UNCONSTRAINED OPTIMIZATION



Abstract

A parallel version is proposed for a fundamental theorem of serial unconstrained optimization. The parallel theorem allows each of k parallel processors to simultaneously use a different algorithm, such as a descent, Newton, quasi-Newton, or conjugate gradient algorithm. Each processor can perform one or many steps of a serial algorithm on the portion of the gradient of the objective function assigned to it, independently of the other processors. Eventually a synchronization step is performed which, for differentiable convex functions, consists of taking a strong convex combination of the k points found by the k processors. A more general synchronization step, applicable to convex as well as nonconvex functions, consists of taking the best point found by the k processors, or any point that is better. The fundamental result that we establish is that any accumulation point of the parallel algorithm is stationary in the nonconvex case and is a global solution in the convex case. Computational testing on the Thinking Machines CM-5 multiprocessor indicates a speedup of the order of the number of processors employed. [References: 16]
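The scheme described in the abstract can be sketched in code. The following is a minimal illustrative sketch, not the paper's exact algorithm: the objective, step size, inner iteration count, and equal-weight combination are all assumptions made here for demonstration. Each of k simulated "processors" takes gradient steps using only its assigned block of gradient components; synchronization then forms a strong convex combination of the k resulting points and, per the more general rule, keeps the best point found.

```python
import numpy as np

def f(x):
    # Simple differentiable convex test objective (an assumption for this
    # sketch): f(x) = ||x - 1||^2, minimized at the all-ones vector.
    return float(np.sum((x - 1.0) ** 2))

def grad(x):
    return 2.0 * (x - 1.0)

def pgd_step(x, k=4, step=0.25, inner_iters=5):
    """One outer iteration of a parallel-gradient-distribution-style step."""
    n = x.size
    # Partition the gradient components among the k processors.
    blocks = np.array_split(np.arange(n), k)
    points = []
    for blk in blocks:
        y = x.copy()
        for _ in range(inner_iters):
            g = np.zeros(n)
            g[blk] = grad(y)[blk]  # this processor sees only its block
            y -= step * g          # several serial steps, independently
        points.append(y)
    # Synchronization: strong convex combination of the k points
    # (equal positive weights here, an illustrative choice).
    x_sync = np.mean(points, axis=0)
    # More general synchronization rule: take the best point found,
    # or any point that is better.
    return min(points + [x_sync], key=f)

x = np.zeros(8)
for _ in range(20):
    x = pgd_step(x)
```

On this convex quadratic the iterates approach the global minimizer, consistent with the abstract's convergence result for the convex case; in a real parallel setting the k block updates would run concurrently rather than in a loop.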
