IEEE Transactions on Signal Processing

Space-alternating generalized expectation-maximization algorithm


Abstract

The expectation-maximization (EM) method can facilitate maximizing likelihood functions that arise in statistical estimation problems. In the classical EM paradigm, one iteratively maximizes the conditional log-likelihood of a single unobservable complete data space, rather than maximizing the intractable likelihood function for the measured or incomplete data. EM algorithms update all parameters simultaneously, which has two drawbacks: 1) slow convergence, and 2) difficult maximization steps due to coupling when smoothness penalties are used. The paper describes the space-alternating generalized EM (SAGE) method, which updates the parameters sequentially by alternating between several small hidden-data spaces defined by the algorithm designer. The authors prove that the sequence of estimates monotonically increases the penalized-likelihood objective, derive asymptotic convergence rates, and provide sufficient conditions for monotone convergence in norm. Two signal processing applications illustrate the method: estimation of superimposed signals in Gaussian noise, and image reconstruction from Poisson measurements. In both applications, the SAGE algorithms easily accommodate smoothness penalties and converge faster than the EM algorithms.
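As a rough illustration of the update pattern described above, the sketch below contrasts an EM step with a SAGE step for a simplified special case of the first application: known signal shapes with unknown amplitudes superimposed in white Gaussian noise. The signal matrix S, the equal noise split beta, and the two step functions are illustrative assumptions for this linear case, not the paper's general algorithm for (possibly nonlinear) signal parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simplified linear special case: K known signal shapes s_k with unknown
# amplitudes a_k observed superimposed in white Gaussian noise,
# y = sum_k a_k * s_k + n.
N, K = 64, 3
S = rng.standard_normal((N, K))            # columns are the known signals s_k
a_true = np.array([1.0, -2.0, 0.5])
y = S @ a_true + 0.1 * rng.standard_normal(N)

def em_step(a):
    """Classical EM: split the noise evenly among K complete-data spaces and
    update every amplitude simultaneously."""
    beta = np.full(K, 1.0 / K)              # equal noise split among components
    r = y - S @ a                           # residual at the current estimate
    a_new = np.empty(K)
    for k in range(K):
        x_k = a[k] * S[:, k] + beta[k] * r              # E-step: complete data for k
        a_new[k] = S[:, k] @ x_k / (S[:, k] @ S[:, k])  # M-step: 1-D maximization
    return a_new

def sage_step(a):
    """SAGE-style step: one small hidden-data space per amplitude, with all of
    the noise attributed to the component being updated; components are updated
    sequentially, so each E-step uses the latest values of the others."""
    a = a.copy()
    for k in range(K):
        r = y - S @ a                       # residual with most recent estimates
        x_k = a[k] * S[:, k] + r            # E-step: hidden data for index k
        a[k] = S[:, k] @ x_k / (S[:, k] @ S[:, k])      # M-step: 1-D maximization
    return a

a_em = np.zeros(K)
a_sage = np.zeros(K)
for it in range(20):
    a_em = em_step(a_em)
    a_sage = sage_step(a_sage)
print("EM  :", a_em)
print("SAGE:", a_sage)  # typically reaches the least-squares solution in fewer iterations
```

In this toy setting the SAGE update reduces to coordinate-wise maximization of the quadratic objective, which is exactly the "smaller hidden-data space, faster convergence" trade-off the abstract describes.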
