Journal: Computational Optimization and Applications

A class of ADMM-based algorithms for three-block separable convex programming

Abstract

The alternating direction method of multipliers (ADMM) has recently found many applications in various domains whose models can be represented or reformulated as a separable convex minimization model with linear constraints and an objective function that is the sum of two functions without coupled variables. For more complicated applications that can only be represented by a multi-block separable convex minimization model, whose objective function is the sum of more than two functions without coupled variables, it was recently shown that the direct extension of ADMM is not necessarily convergent. On the other hand, despite the lack of convergence guarantees, the direct extension of ADMM is empirically efficient for many applications. We are therefore interested in an algorithm that can be implemented as easily as the direct extension of ADMM, while offering comparable or even better numerical performance and guaranteed convergence. In this paper, we suggest correcting the output of the direct extension of ADMM slightly by a simple correction step. The correction step is simple in the sense that it is completely free from step-size computation and its step size is bounded away from zero for every iterate. A prototype algorithm in this prediction-correction framework is proposed, and a unified and easily checkable condition ensuring its convergence is given. Theoretically, we show the contraction property, prove global convergence, and establish a worst-case convergence rate measured by the iteration complexity for this prototype algorithm. The analysis is conducted in the variational inequality context. Then, based on this prototype algorithm, we propose a class of specific ADMM-based algorithms that can be used for three-block separable convex minimization models. Their numerical efficiency is verified by an image decomposition problem.
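To make the prediction-correction idea concrete, the following is a minimal NumPy sketch on a toy three-block problem (minimizing three separable quadratics subject to x1 + x2 + x3 = d, so all coupling matrices are identities and each subproblem has a closed-form minimizer). The "prediction" is one Gauss-Seidel sweep of the direct extension of ADMM; the "correction" used here is a simplified stand-in with a constant step length alpha, chosen only to illustrate a correction that requires no per-iteration step-size computation. The exact correction step and convergence condition of the paper's prototype algorithm differ; all parameter values and names below are illustrative assumptions.

```python
import numpy as np

# Toy instance: minimize 0.5*||x1-c1||^2 + 0.5*||x2-c2||^2 + 0.5*||x3-c3||^2
# subject to x1 + x2 + x3 = d  (A1 = A2 = A3 = I, b = d).
rng = np.random.default_rng(0)
n = 5
c1, c2, c3, d = rng.standard_normal((4, n))

beta = 1.0    # penalty parameter of the augmented Lagrangian
alpha = 0.9   # fixed correction step length (illustrative choice)

x1, x2, x3, lam = np.zeros(n), np.zeros(n), np.zeros(n), np.zeros(n)

for k in range(200):
    # --- prediction: one sweep of the direct extension of ADMM ---
    # each subproblem is a small quadratic with a closed-form minimizer
    x1_t = (c1 + lam + beta * (d - x2 - x3)) / (1.0 + beta)
    x2_t = (c2 + lam + beta * (d - x1_t - x3)) / (1.0 + beta)
    x3_t = (c3 + lam + beta * (d - x1_t - x2_t)) / (1.0 + beta)
    lam_t = lam - beta * (x1_t + x2_t + x3_t - d)

    # --- correction: move a fixed fraction toward the predictor ---
    # (simplified stand-in for the paper's correction step; the constant
    #  step length means no step-size computation is needed per iterate)
    x1 += alpha * (x1_t - x1)
    x2 += alpha * (x2_t - x2)
    x3 += alpha * (x3_t - x3)
    lam += alpha * (lam_t - lam)

print("constraint residual:", np.linalg.norm(x1 + x2 + x3 - d))
```

On this strongly convex toy instance the iterates settle quickly and the constraint residual decays to machine precision; the point of the sketch is only the two-phase structure (ADMM sweep as predictor, cheap fixed-length correction), not the specific correction rule analyzed in the paper.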
