Statistics and Computing

Bayesian regularisation in structured additive regression: a unifying perspective on shrinkage, smoothing and predictor selection



Abstract

This paper surveys various shrinkage, smoothing and selection priors from a unifying perspective and shows how to combine them for Bayesian regularisation in the general class of structured additive regression models. As a common feature, all regularisation priors are conditionally Gaussian, given further parameters regularising model complexity. Hyperpriors for these parameters encourage shrinkage, smoothness or selection. It is shown that these regularisation (log-) priors can be interpreted as Bayesian analogues of several well-known frequentist penalty terms. Inference can be carried out with unified and computationally efficient MCMC schemes, estimating regularised regression coefficients and basis function coefficients simultaneously with complexity parameters and measuring uncertainty via corresponding marginal posteriors. For variable and function selection we discuss several variants of spike and slab priors which can also be cast into the framework of conditionally Gaussian priors. The performance of the Bayesian regularisation approaches is demonstrated in a hazard regression model and a high-dimensional geoadditive regression model.
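The correspondence between conditionally Gaussian priors and frequentist penalties, and the idea of estimating regression coefficients jointly with complexity parameters by MCMC, can be illustrated in the simplest special case: Bayesian ridge regression. The prior β | τ² ~ N(0, τ²I) has a log-density proportional to the ridge penalty −‖β‖²/(2τ²), and an inverse-gamma hyperprior on the variance τ² lets a Gibbs sampler learn the degree of shrinkage from the data. The sketch below is a minimal, hypothetical illustration of this mechanism (not code from the paper; the hyperparameter values a, b, c, d are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated linear-model data (illustrative, not from the paper)
n, p = 200, 5
beta_true = np.array([2.0, -1.5, 0.0, 0.0, 1.0])
X = rng.normal(size=(n, p))
y = X @ beta_true + rng.normal(scale=1.0, size=n)

# Gibbs sampler for Bayesian ridge regression:
#   beta  | tau2, sigma2 ~ N(mu, Sigma),  Sigma = (X'X/sigma2 + I/tau2)^{-1}
#   tau2  | beta  ~ InvGamma(a + p/2, b + beta'beta/2)     (complexity parameter)
#   sigma2| beta  ~ InvGamma(c + n/2, d + ||y - X beta||^2/2)
# The conditional log-prior of beta given tau2 is -beta'beta/(2*tau2) + const,
# i.e. a ridge penalty with data-driven smoothing parameter 1/tau2.
a = b = c = d = 0.01          # vague inverse-gamma hyperparameters (illustrative)
XtX, Xty = X.T @ X, X.T @ y
tau2, sigma2 = 1.0, 1.0
n_iter, burn = 2000, 500
draws = []

for it in range(n_iter):
    # Draw beta from its conditionally Gaussian full conditional
    cov = np.linalg.inv(XtX / sigma2 + np.eye(p) / tau2)
    mu = cov @ (Xty / sigma2)
    beta = rng.multivariate_normal(mu, cov)
    # Draw the complexity parameter tau2 (inverse-gamma via a gamma draw)
    tau2 = 1.0 / rng.gamma(a + p / 2, 1.0 / (b + beta @ beta / 2))
    # Draw the error variance sigma2
    resid = y - X @ beta
    sigma2 = 1.0 / rng.gamma(c + n / 2, 1.0 / (d + resid @ resid / 2))
    if it >= burn:
        draws.append(beta)

draws = np.array(draws)
beta_hat = draws.mean(axis=0)           # posterior mean of coefficients
beta_sd = draws.std(axis=0)             # marginal posterior uncertainty
```

Because τ² is sampled alongside β, the effective penalty strength is inferred rather than fixed, and the retained draws give marginal posteriors that quantify uncertainty in both the coefficients and the complexity parameter — the same principle the paper applies to the much richer class of structured additive regression models.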
