
An Expanded Theoretical Treatment of Iteration-Dependent Majorize-Minimize Algorithms


Abstract

The Majorize-Minimize (MM) optimization technique has received considerable attention in signal and image processing applications, as well as in the statistics literature. At each iteration of an MM algorithm, one constructs a tangent majorant function that majorizes the given cost function and equals it at the current iterate. The next iterate is obtained by minimizing this tangent majorant, yielding a sequence of iterates that reduces the cost function monotonically. A well-known special case of MM methods is the class of Expectation-Maximization (EM) algorithms. In this paper, we expand on previous analyses of MM, due to [, ], that allowed the tangent majorants to be constructed in iteration-dependent ways. In addition, one of the steps of the convergence proof in [] contained an error, which this paper overcomes.

Our analysis builds upon previous work in three main respects. First, our treatment relaxes many assumptions about the structure of the cost function, the feasible set, and the tangent majorants: for example, the cost function can be non-convex, and the feasible set can be any convex set. Second, we propose convergence conditions, based on upper curvature bounds, that can be easier to verify than the more standard continuity conditions; furthermore, these conditions allow considerable design freedom in the iteration-dependent behavior of the algorithm. Finally, we give an original characterization of the local region of convergence of MM algorithms based on connected (e.g., convex) tangent majorants. For such algorithms, cost function minimizers locally attract the iterates over larger neighborhoods than is typically guaranteed with other methods.
This expanded treatment widens the scope of MM algorithm designs that can be considered for signal and image processing applications, allows us to verify the convergent behavior of previously published algorithms, and gives a fuller understanding overall of how these algorithms behave.
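To make the iteration concrete, the following minimal sketch (not taken from the paper) illustrates the simplest MM construction the abstract alludes to: when an upper bound L on the cost function's curvature is available, the quadratic q(x; x_k) = f(x_k) + f'(x_k)(x − x_k) + (L/2)(x − x_k)² is a tangent majorant, and minimizing it gives the next iterate in closed form. The cost function and bound below are illustrative choices, not from the source.

```python
import math

def mm_minimize(f, grad, L, x0, n_iter=50):
    """Minimize a smooth 1-D cost f via MM with quadratic tangent majorants.

    At iterate x_k the quadratic
        q(x; x_k) = f(x_k) + grad(x_k)*(x - x_k) + (L/2)*(x - x_k)**2
    majorizes f (since L upper-bounds f's curvature) and touches f at x_k,
    so its minimizer x_{k+1} = x_k - grad(x_k)/L can only decrease f.
    """
    x = x0
    history = [f(x)]
    for _ in range(n_iter):
        x = x - grad(x) / L          # closed-form minimizer of the majorant
        history.append(f(x))
    return x, history

# Toy cost: f(x) = log cosh(x); f''(x) = sech^2(x) <= 1, so L = 1 is a
# valid global curvature bound and the MM step is x_{k+1} = x_k - tanh(x_k).
f = lambda x: math.log(math.cosh(x))
grad = lambda x: math.tanh(x)

x_star, hist = mm_minimize(f, grad, L=1.0, x0=3.0)
# The MM descent property guarantees the cost sequence is non-increasing.
assert all(a >= b for a, b in zip(hist, hist[1:]))
```

An iteration-dependent variant, of the kind this paper analyzes, would replace the fixed global L with a bound L_k recomputed at each iterate (e.g., a local curvature bound near x_k), trading cheaper majorants for the extra care in the convergence analysis that the paper supplies.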
