Annual Conference on Learning Theory (COLT 2006); June 22–25, 2006; Pittsburgh, PA (US)

Unifying Divergence Minimization and Statistical Inference Via Convex Duality



Abstract

In this paper we unify divergence minimization and statistical inference by means of convex duality. In the process of doing so, we prove that the dual of approximate maximum entropy estimation is maximum a posteriori estimation as a special case. Moreover, our treatment leads to stability and convergence bounds for many statistical learning problems. Finally, we show how an algorithm by Zhang can be used to solve this class of optimization problems efficiently.
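As a rough sketch of the duality the abstract describes (the specific norm, feature map φ, reference measure q, and slack ε below are illustrative assumptions, not details taken from this page): approximate maximum entropy estimation relaxes exact moment matching to a norm-ball constraint, and Lagrangian duality turns the relaxation into a norm penalty on the natural parameter, which can be read as a log-prior, i.e. MAP estimation.

```latex
% Illustrative sketch only; symbols (\phi, q, \tilde\mu, \varepsilon) are assumptions.
% Primal: approximate maximum entropy (minimum KL to a reference q)
% subject to approximately matching the empirical feature mean \tilde\mu.
\begin{aligned}
\text{(primal)}\quad
  &\min_{p}\; \mathrm{KL}(p \,\|\, q)
   \quad\text{s.t.}\quad
   \bigl\| \mathbb{E}_{p}[\phi(x)] - \tilde\mu \bigr\| \le \varepsilon \\
\text{(dual)}\quad
  &\max_{\theta}\; \langle \theta, \tilde\mu \rangle
   \;-\; \log \int q(x)\, e^{\langle \theta,\, \phi(x) \rangle}\, dx
   \;-\; \varepsilon\, \| \theta \|_{*}
\end{aligned}
```

Here \(\|\cdot\|_{*}\) denotes the dual norm. The dual objective is a penalized log-likelihood of an exponential family, so the penalty term \(\varepsilon\|\theta\|_{*}\) plays the role of a negative log-prior, recovering MAP estimation as a special case.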
