International Joint Conference on Neural Networks

Stochastic Variational Inference for Bayesian Sparse Gaussian Process Regression

Abstract

This paper presents a novel variational inference framework for deriving a family of Bayesian sparse Gaussian process regression (SGPR) models whose approximations are variationally optimal with respect to the full-rank GPR model enriched with various corresponding correlation structures of the observation noises. Our variational Bayesian SGPR (VBSGPR) models jointly treat the distributions of both the inducing variables and the hyperparameters as variational parameters, which makes the variational lower bound decomposable and hence amenable to stochastic optimization. Such stochastic optimization iteratively follows the stochastic gradient of the variational lower bound to refine its estimates of the optimal variational distributions of the inducing variables and hyperparameters (and hence the predictive distribution) of our VBSGPR models, and is guaranteed to converge asymptotically to them. We show that the stochastic gradient is an unbiased estimator of the exact gradient and can be computed in constant time per iteration, hence achieving scalability to big data. We empirically evaluate the performance of our proposed framework on two real-world, massive datasets.
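To see why decomposability of the lower bound enables stochastic optimization, note that (per the abstract) the bound splits into a minibatch-independent regularization term plus a sum over the N observations; rescaling a uniformly sampled minibatch B by N/|B| then yields an unbiased gradient estimator. A sketch of the identity, with L the lower bound and lambda the variational parameters (the generic SVI argument, not the paper's specific derivation):

```latex
\mathcal{L}(\lambda) = -\mathrm{KL}(\lambda) + \sum_{i=1}^{N} \ell_i(\lambda),
\qquad
\mathbb{E}_{\mathcal{B}}\!\left[-\nabla_\lambda \mathrm{KL}(\lambda)
  + \frac{N}{|\mathcal{B}|} \sum_{i \in \mathcal{B}} \nabla_\lambda \ell_i(\lambda)\right]
= \nabla_\lambda \mathcal{L}(\lambda).
```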

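Below is a minimal JAX sketch of the stochastic optimization loop the abstract describes. It is not the paper's exact VBSGPR bound: for brevity it fixes the kernel hyperparameters and noise variance at point estimates (the paper also treats the hyperparameters variationally) and uses the standard sparse-GP evidence lower bound with a Gaussian likelihood and q(u) = N(m, S) over M inducing variables. All names (rbf, minibatch_elbo, fit) are illustrative.

```python
import jax
import jax.numpy as jnp

def rbf(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel; hyperparameters fixed for simplicity."""
    d2 = jnp.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return variance * jnp.exp(-0.5 * d2 / lengthscale ** 2)

def minibatch_elbo(params, Xb, yb, Z, noise_var, N):
    """Unbiased minibatch estimate of the variational lower bound."""
    m = params["m"]                      # q(u) = N(m, S) with S = L_S L_S^T
    L_S = jnp.tril(params["L_S"])
    M, B = Z.shape[0], Xb.shape[0]
    Kmm = rbf(Z, Z) + 1e-6 * jnp.eye(M)  # jitter for numerical stability
    Kmn = rbf(Z, Xb)
    Lmm = jnp.linalg.cholesky(Kmm)
    A = jax.scipy.linalg.cho_solve((Lmm, True), Kmn)   # Kmm^{-1} Kmn
    S = L_S @ L_S.T
    mean_f = A.T @ m                                   # E_q[f] at Xb
    var_f = (jnp.diagonal(rbf(Xb, Xb))                 # Var_q[f] at Xb
             - jnp.sum(Kmn * A, axis=0)
             + jnp.sum(A * (S @ A), axis=0))
    # Expected Gaussian log-likelihood over the minibatch, rescaled by
    # N/B so its gradient is an unbiased estimate of the full-data term.
    ell = (-0.5 * jnp.log(2.0 * jnp.pi * noise_var)
           - 0.5 * ((yb - mean_f) ** 2 + var_f) / noise_var)
    ell = (N / B) * jnp.sum(ell)
    # KL(q(u) || p(u)) with p(u) = N(0, Kmm); independent of the minibatch.
    kl = 0.5 * (m @ jax.scipy.linalg.cho_solve((Lmm, True), m)
                + jnp.trace(jax.scipy.linalg.cho_solve((Lmm, True), S))
                - M
                + 2.0 * jnp.sum(jnp.log(jnp.diagonal(Lmm)))
                - 2.0 * jnp.sum(jnp.log(jnp.abs(jnp.diagonal(L_S)))))
    return ell - kl

grad_elbo = jax.jit(jax.grad(minibatch_elbo))

def fit(X, y, Z, steps=2000, batch=256, lr=1e-3, noise_var=0.1, seed=0):
    """Plain stochastic gradient ascent on the lower bound."""
    N, M = X.shape[0], Z.shape[0]
    params = {"m": jnp.zeros(M), "L_S": jnp.eye(M)}
    key = jax.random.PRNGKey(seed)
    for _ in range(steps):
        key, sub = jax.random.split(key)
        idx = jax.random.choice(sub, N, shape=(batch,), replace=False)
        g = grad_elbo(params, X[idx], y[idx], Z, noise_var, N)
        params = jax.tree_util.tree_map(lambda p, gp: p + lr * gp, params, g)
    return params
```

Each step costs O(B M^2 + M^3), independent of the dataset size N, which is the sense in which the abstract's "constant time per iteration" claim delivers scalability to big data.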