Journal: Machine Learning

Stochastic gradient Hamiltonian Monte Carlo with variance reduction for Bayesian inference



Abstract

Gradient-based Monte Carlo sampling algorithms, like Langevin dynamics and Hamiltonian Monte Carlo, are important methods for Bayesian inference. In large-scale settings, full gradients are too expensive to compute, so stochastic gradients evaluated on mini-batches are used instead. To reduce the high variance of these noisy stochastic gradients, Dubey et al. (in: Advances in Neural Information Processing Systems, pp 1154–1162, 2016) applied the standard variance reduction technique to stochastic gradient Langevin dynamics and obtained both theoretical and experimental improvements. In this paper, we apply the variance reduction technique to Hamiltonian Monte Carlo and achieve better theoretical convergence results than variance-reduced Langevin dynamics. Moreover, we apply the symmetric splitting scheme in our variance-reduced Hamiltonian Monte Carlo algorithms to further improve the theoretical results. The experimental results are consistent with the theory: as our experiments show, variance-reduced Hamiltonian Monte Carlo outperforms variance-reduced Langevin dynamics on Bayesian regression and classification tasks with real-world datasets.
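The combination described above can be sketched as follows. This is a minimal illustration on a toy one-dimensional Bayesian regression, assuming an SVRG-style control-variate gradient estimator inside a plain SGHMC update (the toy model, hyperparameters, and function names are ours for illustration, not the paper's exact algorithm; the symmetric splitting scheme is omitted here).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D Bayesian linear regression data, true slope 2.0
N = 1000
x = rng.normal(size=N)
y = 2.0 * x + rng.normal(scale=0.5, size=N)

def grad_U(theta, idx):
    """Sum of per-example gradients of the potential U over the indices idx,
    with the standard-normal prior term spread uniformly across the N examples."""
    xi, yi = x[idx], y[idx]
    return np.sum(-(yi - xi * theta) * xi) + (len(idx) / N) * theta

def svrg_hmc(n_outer=50, n_inner=20, batch=10, step=1e-4, friction=1.0):
    theta, r = 0.0, 0.0
    samples = []
    for _ in range(n_outer):
        snap = theta                              # snapshot for the control variate
        full_grad = grad_U(snap, np.arange(N))    # full gradient at the snapshot
        for _ in range(n_inner):
            idx = rng.choice(N, size=batch, replace=False)
            # SVRG estimator: scaled mini-batch difference recentred by the full gradient
            g = (N / batch) * (grad_U(theta, idx) - grad_U(snap, idx)) + full_grad
            # SGHMC momentum update with friction C and matched injected noise
            r += -step * g - step * friction * r \
                 + np.sqrt(2.0 * friction * step) * rng.normal()
            theta += step * r
        samples.append(theta)
    return np.array(samples)

samples = svrg_hmc()
```

The control variate cancels most of the mini-batch noise whenever `theta` stays close to the snapshot, which is what drives the improved convergence bounds relative to plain stochastic gradients.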


