Journal of Statistical Computation and Simulation

Convergence of Griddy Gibbs sampling and other perturbed Markov chains

Abstract

The Griddy Gibbs sampling method was proposed by Ritter and Tanner [Facilitating the Gibbs Sampler: the Gibbs Stopper and the Griddy-Gibbs Sampler. J Am Stat Assoc. 1992;87(419):861-868] as a computationally efficient approximation of the well-known Gibbs sampling method. The algorithm is simple and effective and has been used successfully to address problems in various fields of applied science. However, the approximate nature of the algorithm has prevented it from being widely used: the Markov chains generated by the Griddy Gibbs sampling method are not reversible in general, so the existence and uniqueness of its invariant measure is not guaranteed. Even when such an invariant measure uniquely exists, there was no estimate of the distance between it and the probability distribution of interest, and hence no means to ensure the validity of the algorithm as a way to sample from the true distribution. In this paper, we show, subject to some fairly natural conditions, that the Griddy Gibbs method has a unique invariant measure. Moreover, we provide L^p estimates on the distance between this invariant measure and the corresponding measure obtained from Gibbs sampling. These results provide a theoretical foundation for the use of the Griddy Gibbs sampling method. We also address a more general result about the sensitivity of invariant measures under small perturbations of the transition probability. That is, if we replace the transition probability P of any Monte Carlo Markov chain by another transition probability Q that is close to P, we can still estimate the distance between the two invariant measures. The distinguishing feature between our approach and previous work on the convergence of perturbed Markov chains is that, by considering the invariant measures as fixed points of linear operators on function spaces, we do not need to impose any further conditions on the rate of convergence of the Markov chain. For example, the results derived in this paper can address the case in which the Monte Carlo Markov chains under consideration are not uniformly ergodic.
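To make the algorithm concrete, the following is a minimal sketch of a single Griddy Gibbs coordinate update in Python, assuming a user-supplied log of the unnormalized full conditional and a fixed grid over its support. The function and parameter names are illustrative rather than taken from the paper, and the piecewise-constant (discrete) inversion used here is only one simple variant of Ritter and Tanner's scheme.

    import numpy as np

    def griddy_gibbs_update(log_cond, grid, rng):
        # Evaluate the log of the unnormalized full conditional on the grid.
        logp = np.array([log_cond(x) for x in grid])
        # Normalize into a discrete approximation of the conditional density,
        # subtracting the maximum first for numerical stability.
        w = np.exp(logp - logp.max())
        w /= w.sum()
        # Invert the resulting step-function CDF with a single uniform draw.
        cdf = np.cumsum(w)
        u = rng.uniform()
        return grid[np.searchsorted(cdf, u)]

    # Example: one update for a coordinate whose full conditional is
    # proportional to exp(-x**2 / 2), gridded on [-5, 5] (a toy target).
    rng = np.random.default_rng(0)
    grid = np.linspace(-5.0, 5.0, 201)
    draw = griddy_gibbs_update(lambda x: -0.5 * x**2, grid, rng)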
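The perturbation statement in the abstract can be illustrated numerically on a finite state space, where invariant measures can be computed exactly: replacing a transition matrix P by a nearby matrix Q moves the stationary distribution only slightly. The sketch below uses an invented three-state chain and an invented mixing perturbation purely for illustration; it does not reproduce the L^p bounds derived in the paper.

    import numpy as np

    def stationary(P):
        # Invariant measure of a row-stochastic matrix P, computed as the
        # left eigenvector for eigenvalue 1, normalized to sum to one.
        vals, vecs = np.linalg.eig(P.T)
        v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
        return v / v.sum()

    # A small illustrative chain P and a perturbed kernel Q obtained by
    # mixing P with the uniform kernel.
    P = np.array([[0.9, 0.1, 0.0],
                  [0.2, 0.7, 0.1],
                  [0.0, 0.3, 0.7]])
    eps = 0.01
    Q = (1.0 - eps) * P + eps * np.full_like(P, 1.0 / P.shape[0])

    pi_P, pi_Q = stationary(P), stationary(Q)
    print("max-norm distance between kernels:", np.abs(P - Q).max())
    print("total-variation distance of invariant measures:",
          0.5 * np.abs(pi_P - pi_Q).sum())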
