JMLR: Workshop and Conference Proceedings

Efficient Gradient-Free Variational Inference using Policy Search

Abstract

Inference from complex distributions is a common problem in machine learning needed for many Bayesian methods. We propose an efficient, gradient-free method for learning general GMM approximations of multimodal distributions based on recent insights from stochastic search methods. Our method establishes information-geometric trust regions to ensure efficient exploration of the sampling space and stability of the GMM updates, allowing for efficient estimation of multi-variate Gaussian variational distributions. For GMMs, we apply a variational lower bound to decompose the learning objective into sub-problems given by learning the individual mixture components and the coefficients. The number of mixture components is adapted online in order to allow for arbitrarily exact approximations. We demonstrate on several domains that we can learn significantly better approximations than competing variational inference methods and that the quality of samples drawn from our approximations is on par with samples created by state-of-the-art MCMC samplers that require significantly more computational resources.
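
A minimal sketch of the kind of decomposition the abstract refers to, assuming the unnormalized target is written as $\tilde{p}(x)$, the GMM as $q(x) = \sum_o q(o)\, q(x \mid o)$, and an auxiliary responsibility model $\tilde{q}(o \mid x)$ (these symbols are chosen here for illustration and need not match the paper's notation):

\[
\mathbb{E}_{q(x)}\!\left[\log \tilde{p}(x) - \log q(x)\right]
\;\ge\;
\sum_o q(o)\, \mathbb{E}_{q(x \mid o)}\!\left[\log \tilde{p}(x) + \log \tilde{q}(o \mid x) - \log q(x \mid o)\right]
\;-\; \sum_o q(o)\log q(o).
\]

The gap is $\mathbb{E}_{q(x)}\!\left[\mathrm{KL}\!\left(q(o \mid x)\,\|\,\tilde{q}(o \mid x)\right)\right]$, so the bound is tight when the auxiliary model equals the mixture's true responsibilities. Maximizing the right-hand side then separates into independent updates of the individual components $q(x \mid o)$ and a closed-form update of the coefficients $q(o)$, which is the sub-problem structure described above.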
