
Priors on the Variance in Sparse Bayesian Learning; the demi-Bayesian Lasso


Abstract

We explore the use of proper priors for the variance parameters of certain sparse Bayesian regression models. This leads to a connection between sparse Bayesian learning (SBL) models (Tipping, 2001) and the recently proposed Bayesian Lasso (Park and Casella, 2008). We outline simple modifications of existing algorithms to solve this new variant, which essentially uses type-II maximum likelihood to fit the Bayesian Lasso model. We also propose an Elastic-net (Zou and Hastie, 2005) heuristic to help with modeling correlated inputs. Experimental results show that the proposals compare favorably to the Lasso as well as to both traditional and more recent sparse Bayesian algorithms.
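As background for the connection the abstract describes, the following is the standard Gaussian scale-mixture identity that underlies it (notation is ours, not taken from the paper): placing an SBL-style zero-mean Gaussian prior on each regression weight w_j with variance \gamma_j, together with a proper exponential hyperprior on \gamma_j, integrates to the Laplace prior of the Bayesian Lasso,

\[
p(w_j) \;=\; \int_0^{\infty} \mathcal{N}\!\left(w_j \mid 0,\, \gamma_j\right)\,
\frac{\lambda^2}{2}\, e^{-\lambda^2 \gamma_j / 2}\, d\gamma_j
\;=\; \frac{\lambda}{2}\, e^{-\lambda\, |w_j|}.
\]

Maximizing the marginal likelihood over the \gamma_j under this proper prior, i.e. type-II maximum likelihood as in Tipping's SBL, then amounts to fitting the Bayesian Lasso hierarchy, which appears to be the sense in which the title calls the approach "demi-Bayesian"; the modified update rules themselves are given in the paper.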

Bibliographic record

  • Author: Madigan, David
  • Author affiliation:
  • Year: 2008
  • Total pages:
  • Original format: PDF
  • Language: English
  • CLC classification:
