
Bounding errors of Expectation-Propagation



Abstract

Expectation Propagation is a very popular algorithm for variational inference, but it comes with few theoretical guarantees. In this article, we prove that the approximation errors made by EP can be bounded. Our bounds have an asymptotic interpretation in the number n of datapoints, which allows us to study EP's convergence with respect to the true posterior. In particular, we show that EP converges at a rate of O(n^{-2}) for the mean, up to an order of magnitude faster than the traditional Gaussian approximation at the mode. We also give similar asymptotic expansions for moments of order 2 to 4, as well as for the excess Kullback-Leibler cost (defined as the additional KL cost incurred by using EP rather than the ideal Gaussian approximation). All these expansions highlight the superior convergence properties of EP. Our approach for deriving those results is likely applicable to many similar approximate inference methods. In addition, we introduce bounds on the moments of log-concave distributions that may be of independent interest.
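The gap the abstract describes can be illustrated on a toy conjugate model (this sketch is not from the paper): for a Beta posterior, the exact posterior mean and the posterior mode, which is where the "Gaussian approximation at the mode" (Laplace) is centred, differ by O(1/n), whereas the abstract's result says EP tracks the mean with only O(n^{-2}) error.

```python
# Minimal illustration (assumed setup, not the paper's experiments):
# after k successes in n Bernoulli trials under a flat prior, the
# posterior is Beta(k+1, n-k+1), so mean and mode are known exactly.

def beta_mean_mode_gap(n, frac=0.3):
    """Return |posterior mean - posterior mode| for k = frac*n successes."""
    k = frac * n
    mean = (k + 1) / (n + 2)   # exact posterior mean
    mode = k / n               # Laplace approximation is centred here
    return abs(mean - mode)

for n in (100, 1000, 10000):
    # n * gap stabilises near a constant, i.e. the gap decays like O(1/n)
    print(n, n * beta_mean_mode_gap(n))
```

Since the mode-centred Gaussian already misplaces the mean by O(1/n), an approximation whose mean error is O(n^{-2}), as the abstract proves for EP, is asymptotically an order of magnitude more accurate in n.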
