International Conference on Machine Learning

Bayesian Posterior Sampling via Stochastic Gradient Fisher Scoring



Abstract

In this paper we address the following question: "Can we approximately sample from a Bayesian posterior distribution if we are only allowed to touch a small mini-batch of data items for every sample we generate?" An algorithm based on the Langevin equation with stochastic gradients (SGLD) was previously proposed to solve this, but its mixing rate was slow. By leveraging the Bayesian Central Limit Theorem, we extend the SGLD algorithm so that at high mixing rates it will sample from a normal approximation of the posterior, while at slow mixing rates it will mimic the behavior of SGLD with a preconditioner matrix. As a bonus, the proposed algorithm is reminiscent of Fisher scoring (with stochastic gradients) and as such is an efficient optimizer during burn-in.
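The SGLD baseline that the abstract extends can be sketched in a few lines: each update adds half a step of the stochastic gradient of the log posterior (mini-batch likelihood gradient rescaled by N/n) plus Gaussian noise with variance equal to the step size. The sketch below is an illustrative toy, estimating a Gaussian mean, with hypothetical step size and batch size; it is plain SGLD, not the paper's Fisher-scoring extension, and with a fixed step size the mini-batch gradient noise slightly inflates the sampled variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (an assumption for illustration): x_i ~ N(theta, 1),
# prior theta ~ N(0, 10). The posterior is Gaussian in closed form,
# so we can check the sampler against it.
N = 1000
data = rng.normal(2.0, 1.0, size=N)

def grad_log_prior(theta):
    # d/dtheta log N(theta | 0, 10)
    return -theta / 10.0

def grad_log_lik(theta, batch):
    # d/dtheta sum_i log N(x_i | theta, 1) over the mini-batch
    return np.sum(batch - theta)

n = 50        # mini-batch size (hypothetical choice)
eps = 1e-4    # step size (hypothetical choice)
theta = 0.0
samples = []
for t in range(5000):
    batch = rng.choice(data, size=n, replace=False)
    # Stochastic gradient of the log posterior: prior term plus
    # the mini-batch likelihood gradient rescaled by N/n.
    grad = grad_log_prior(theta) + (N / n) * grad_log_lik(theta, batch)
    # Langevin update: half-step of the gradient plus N(0, eps) noise.
    theta = theta + 0.5 * eps * grad + rng.normal(0.0, np.sqrt(eps))
    if t >= 1000:  # discard burn-in
        samples.append(theta)

# Closed-form Gaussian posterior for comparison.
post_var = 1.0 / (N + 1.0 / 10.0)
post_mean = post_var * data.sum()
print(np.mean(samples), post_mean)
```

The sampled mean tracks the analytic posterior mean closely; it is the posterior *variance* that SGLD distorts at finite step size, which is part of the motivation for the preconditioned, Fisher-scoring variant the abstract proposes.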

