Applied Mathematics and Computation

Approximating the marginal likelihood estimate for models with random parameters


Abstract

Often a model for the mean and variance of a measurement set is naturally expressed in terms of both deterministic and random parameters. Each of the deterministic parameters has one fixed value while the random parameters come from a distribution of values. We restrict our attention to the case where the random parameters and the measurement error have a Gaussian distribution. In this case, the joint likelihood of the data and random parameters is an extended least squares function. The likelihood of the data alone is the integral of this extended least squares function with respect to the random parameters. This is the likelihood that we would like to optimize, but we do not have a closed form expression for the integral. We use Laplace's method to obtain an approximation for the likelihood of the data alone. Maximizing this approximation is less computationally demanding than maximizing the integral expression, but this yields a different estimator. In addition, evaluation of the approximation requires second derivatives of the original model functions. If we were to use this approximation as our objective function, evaluation of the derivative of the objective would require third derivatives of the original model functions. We present modified approximations that are expressed using only values of the original model functions. Evaluation of the derivative of the modified approximations only requires first derivatives of the original model functions. We use Monte Carlo techniques to approximate the difference between an arbitrary estimator and the estimator that maximizes the likelihood of the data alone. In addition, we approximate the information matrix corresponding to the estimator that maximizes the likelihood of the data alone. (C) 2001 Elsevier Science Inc. All rights reserved. [References: 19]
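The core device in the abstract is Laplace's method: the marginal likelihood of the data is the integral of the joint (extended least squares) objective over the random parameters, and this integral is replaced by a second-order expansion around the optimizing random-parameter value. The sketch below illustrates the idea on a deliberately simple one-way random-intercept model, y_i = theta + b + eps_i with Gaussian b and eps. The model, function names, and parameter values are illustrative assumptions, not taken from the paper; because this toy model is Gaussian in b, the Laplace approximation happens to be exact, which lets us check it against the closed-form marginal likelihood.

```python
import numpy as np

def laplace_marginal_loglik(y, theta, sig_b, sig_e):
    """Laplace approximation to the marginal log-likelihood of the
    illustrative model  y_i = theta + b + eps_i,  b ~ N(0, sig_b^2),
    eps_i ~ N(0, sig_e^2).  (Toy example; not the paper's model.)"""
    n = len(y)
    r = y - theta
    # Extended least squares objective in the random parameter b:
    #   f(b) = sum_i (y_i - theta - b)^2 / (2 sig_e^2) + b^2 / (2 sig_b^2)
    # f is quadratic here, so its minimizer and Hessian are closed-form.
    prec = n / sig_e**2 + 1.0 / sig_b**2            # f''(b), constant in b
    b_hat = (r.sum() / sig_e**2) / prec             # argmin of f
    f_hat = ((r - b_hat)**2).sum() / (2 * sig_e**2) + b_hat**2 / (2 * sig_b**2)
    # Laplace:  log ∫ exp(-f(b)) db ≈ -f(b_hat) + 0.5 log(2π) - 0.5 log f''(b_hat)
    log_integral = -f_hat + 0.5 * np.log(2 * np.pi) - 0.5 * np.log(prec)
    # Normalizing constants of the joint density of (y, b):
    log_const = -0.5 * n * np.log(2 * np.pi * sig_e**2) \
                - 0.5 * np.log(2 * np.pi * sig_b**2)
    return log_const + log_integral

def exact_marginal_loglik(y, theta, sig_b, sig_e):
    """Closed form: marginally y ~ N(theta·1, sig_e^2 I + sig_b^2 J)."""
    n = len(y)
    S = sig_e**2 * np.eye(n) + sig_b**2 * np.ones((n, n))
    r = y - theta
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + r @ np.linalg.solve(S, r))
```

For a genuinely nonlinear model, f(b) would be minimized numerically and f''(b_hat) evaluated (or approximated) at the optimum, which is where the paper's concern about second and third derivatives of the model functions enters.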
