
Conditional Density Approximations with Mixtures of Polynomials



Abstract

Mixtures of polynomials (MoPs) are a non-parametric density estimation technique especially designed for hybrid Bayesian networks with continuous and discrete variables. Algorithms to learn one- and multi-dimensional (marginal) MoPs from data have recently been proposed. In this paper we introduce two methods for learning MoP approximations of conditional densities from data. Both approaches are based on learning MoP approximations of the joint density and the marginal density of the conditioning variables, but they differ as to how the MoP approximation of the quotient of the two densities is found. We illustrate and study the methods using data sampled from known parametric distributions, and we demonstrate their applicability by learning models based on real neuroscience data. Finally, we compare the performance of the proposed methods with an approach for learning mixtures of truncated basis functions (MoTBFs). The empirical results show that the proposed methods generally yield models that are comparable to or significantly better than those found using the MoTBF-based method.
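The core idea behind both methods is the identity f(y | x) = f(x, y) / f(x): the conditional density is the quotient of the joint density and the marginal density of the conditioning variable, each of which is approximated by a MoP learned from data. The sketch below is a minimal, self-contained Python illustration of that quotient construction on synthetic Gaussian data; it is not the learning algorithm proposed in the paper. It uses single-piece polynomial surrogates (a toy stand-in for one-piece MoPs) fitted by least squares to histogram density estimates, and all function names (e.g. poly_design) and degree/bin choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample from a known bivariate model so the true conditional density is available:
# X ~ N(0, 1), Y | X = x ~ N(0.5 x, 0.8^2).
n = 5000
x = rng.normal(0.0, 1.0, n)
y = 0.5 * x + rng.normal(0.0, 0.8, n)

# Crude histogram estimates of the joint density f(x, y) and the marginal f(x),
# used only as fitting targets for the polynomial surrogates.
bins = 20
joint_hist, xe, ye = np.histogram2d(x, y, bins=bins, density=True)
marg_hist, _ = np.histogram(x, bins=xe, density=True)

xc = 0.5 * (xe[:-1] + xe[1:])   # bin centres in x
yc = 0.5 * (ye[:-1] + ye[1:])   # bin centres in y
XX, YY = np.meshgrid(xc, yc, indexing="ij")

def poly_design(u, v, deg=4):
    """Design matrix of bivariate monomials u^i * v^j with i + j <= deg."""
    return np.column_stack([u**i * v**j
                            for i in range(deg + 1)
                            for j in range(deg + 1 - i)])

# Least-squares polynomial fit to the joint density on the bin centres
# (a single-piece toy analogue of a MoP of the joint).
A = poly_design(XX.ravel(), YY.ravel())
coef_joint, *_ = np.linalg.lstsq(A, joint_hist.ravel(), rcond=None)

# Univariate polynomial fit to the marginal density of X.
coef_marg = np.polyfit(xc, marg_hist, deg=4)

# Conditional density approximation as the quotient of the two fits,
# evaluated at a query point (x0, y0).
x0, y0 = 0.3, 0.1
joint_val = (poly_design(np.array([x0]), np.array([y0])) @ coef_joint)[0]
marg_val = np.polyval(coef_marg, x0)
print("approx f(y0 | x0):", joint_val / marg_val)

# True conditional for comparison: Y | X = x0 ~ N(0.5 * x0, 0.8^2).
true_val = np.exp(-0.5 * ((y0 - 0.5 * x0) / 0.8) ** 2) / (0.8 * np.sqrt(2 * np.pi))
print("true   f(y0 | x0):", true_val)
```

The paper's two proposed methods differ precisely in how the MoP approximation of this quotient is obtained; the direct pointwise quotient above is only the simplest conceivable variant, shown here to make the construction concrete.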

