Asymptotic normality of posterior in consistent Bayesian learning

Abstract

This paper studies the asymptotic normality of the posterior in Bayesian learning from the viewpoint of computational learning theory. Three reduced regularity conditions, more convenient to apply than those of Walker and Heyde, are presented. A theorem is proved showing that, under these regularity conditions, the posterior distribution is not only consistent but also approximately normal. Since computation with a normal distribution is comparatively simple, these results can serve as a theoretical foundation for further work, for instance in assigning prior distributions and simplifying computation in Bayesian learning.
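The phenomenon the abstract describes, a posterior that concentrates and becomes approximately normal as the sample size grows, can be illustrated numerically. The sketch below (not from the paper; the Beta-Bernoulli model and all parameter values are assumptions chosen for illustration) compares the exact posterior under a uniform prior with its normal approximation centered at the maximum-likelihood estimate, and shows the grid-approximated total-variation distance shrinking as n increases.

```python
# Illustrative sketch of posterior asymptotic normality in a
# Beta-Bernoulli model (hypothetical example, not the paper's setup).
# With a uniform Beta(1,1) prior and s successes in n Bernoulli trials,
# the posterior is Beta(1+s, 1+n-s); its normal approximation is
# N(theta_hat, theta_hat*(1-theta_hat)/n). The total-variation distance
# between the two, approximated on a grid, decreases with n.
import math

def beta_logpdf(x, a, b):
    """Log density of the Beta(a, b) distribution at x in (0, 1)."""
    log_B = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return (a - 1) * math.log(x) + (b - 1) * math.log(1 - x) - log_B

def normal_pdf(x, mu, var):
    """Density of the normal distribution N(mu, var) at x."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def tv_distance(n, theta=0.3, grid=4000):
    """Grid approximation of the total-variation distance between the
    Beta posterior and its normal approximation for a sample of size n."""
    s = round(theta * n)            # assume the sample had s successes
    a, b = 1 + s, 1 + n - s         # posterior parameters (uniform prior)
    theta_hat = s / n               # maximum-likelihood estimate
    var = theta_hat * (1 - theta_hat) / n
    h = 1.0 / grid
    xs = [(i + 0.5) * h for i in range(grid)]
    return 0.5 * sum(
        abs(math.exp(beta_logpdf(x, a, b)) - normal_pdf(x, theta_hat, var)) * h
        for x in xs
    )

for n in (20, 200, 2000):
    print(n, round(tv_distance(n), 4))
```

The printed distances decrease as n grows, which is the practical content of the theorem: for large samples, the posterior can be replaced by a normal distribution with negligible error, simplifying downstream computation.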