
Proximal Distance Algorithms: Theory and Practice


Abstract

Proximal distance algorithms combine the classical penalty method of constrained minimization with distance majorization. If f(x) is the loss function and C is the constraint set in a constrained minimization problem, then the proximal distance principle mandates minimizing the penalized loss f(x) + (ρ/2) dist(x, C)² and following the solution x_ρ to its limit as ρ tends to ∞. At each iteration the squared Euclidean distance dist(x, C)² is majorized by the spherical quadratic ‖x − P_C(x_k)‖², where P_C(x_k) denotes the projection of the current iterate x_k onto C. The minimum of the surrogate function f(x) + (ρ/2)‖x − P_C(x_k)‖² is given by the proximal map prox_{ρ⁻¹f}[P_C(x_k)]. The next iterate x_{k+1} automatically decreases the original penalized loss for fixed ρ. Since many explicit projections and proximal maps are known, it is straightforward to derive and implement novel optimization algorithms in this setting. These algorithms can take hundreds if not thousands of iterations to converge, but the simple nature of each iteration makes proximal distance algorithms competitive with traditional algorithms. For convex problems, proximal distance algorithms reduce to proximal gradient algorithms and therefore enjoy well-understood convergence properties. For nonconvex problems, one can attack convergence by invoking Zangwill's theorem. Our numerical examples demonstrate the utility of proximal distance algorithms in various high-dimensional settings, including a) linear programming, b) constrained least squares, c) projection to the closest kinship matrix, d) projection onto a second-order cone constraint, e) calculation of Horn's copositive matrix index, f) linear complementarity programming, and g) sparse principal components analysis. In each case the proximal distance algorithm is competitive with or superior in speed to traditional methods such as the interior point method and the alternating direction method of multipliers (ADMM). Source code for the numerical examples can be found at .
