Conference on Uncertainty in Artificial Intelligence

An Upper Bound on the Global Optimum in Parameter Estimation

Abstract

Learning graphical model parameters from incomplete data is a non-convex optimization problem. Iterative algorithms, such as Expectation Maximization (EM), can be used to obtain a locally optimal solution. However, little is known about the quality of the learned local optimum compared to the unknown global optimum. We exploit variables that are always observed in the dataset to obtain an upper bound on the global optimum, which can give insight into the quality of the parameters learned by estimation algorithms.
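
A minimal sketch of one way such a bound can be computed, assuming discrete data stored as dictionaries with None marking missing values; the helper name empirical_upper_bound and this particular construction are illustrative and need not match the paper's exact bound. It rests on two facts: each record is at least as specific an event as its projection onto the always-observed variables, and the empirical distribution over those projections maximizes their total log-probability.

    import math
    from collections import Counter

    def empirical_upper_bound(data, observed_vars):
        # data: list of dicts mapping variable names to values (None = missing).
        # observed_vars: variables that have a value in every record.
        # Project each (possibly incomplete) record onto the always-observed variables.
        projections = [tuple(record[v] for v in observed_vars) for record in data]
        counts = Counter(projections)
        n = len(data)
        # For any parameters theta of any model over all variables:
        #   log P_theta(d_i) <= log P_theta(o_i), since the full record d_i is a
        #   sub-event of its projection o_i onto the observed variables, and
        #   sum_i log P_theta(o_i) <= sum_i log Phat(o_i), since the empirical
        #   distribution Phat over those projections maximizes this sum.
        # Hence the global optimum of the log-likelihood is at most
        # the sum over distinct projections o of N(o) * log(N(o) / n).
        return sum(c * math.log(c / n) for c in counts.values())

Comparing the log-likelihood that EM converges to against this value indicates how much room, at most, remains between the learned local optimum and the unknown global optimum.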
