...
Journal: Neural Processing Letters

Entropy Regularized Likelihood Learning on Gaussian Mixture: Two Gradient Implementations for Automatic Model Selection

Abstract

In Gaussian mixture modeling, it is crucial to select the number of Gaussians, i.e., the mixture model, for a sample data set. Under regularization theory, we aim to solve this kind of model selection problem by implementing entropy regularized likelihood (ERL) learning on Gaussian mixtures via a batch gradient learning algorithm. Simulation experiments demonstrate that this gradient ERL learning algorithm can automatically select an appropriate number of Gaussians during parameter learning on a sample data set and lead to a good estimate of the parameters of the actual Gaussian mixture, even when two or more of the actual Gaussians overlap strongly. We further give an adaptive gradient implementation of ERL learning on Gaussian mixtures, together with a theoretical analysis, and find a mechanism of generalized competitive learning implicit in ERL learning.
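The abstract's idea of automatic model selection through entropy regularization can be illustrated with a minimal sketch. The names, the 1-D isotropic simplification, and the exact form of the regularizer below are assumptions, not the paper's implementation: we take the objective to be the average log-likelihood plus γ times the negative entropy of the mixing proportions, so that gradient ascent penalizes small, redundant components and drives their mixing weights toward zero.

```python
import numpy as np

def gaussian_pdf(x, mu, var):
    # 1-D Gaussian density (illustrative simplification of the mixture components).
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def erl_gradient_step(x, alpha, mu, var, gamma=0.5, lr=0.05):
    """One batch gradient ascent step on an assumed ERL objective
    J = (1/N) sum_n log sum_k alpha_k p_k(x_n) + gamma * sum_k alpha_k log alpha_k.
    The second term is the negative entropy of the mixing proportions alpha."""
    K = len(alpha)
    comp = np.array([alpha[k] * gaussian_pdf(x, mu[k], var[k]) for k in range(K)])  # (K, N)
    mix = comp.sum(axis=0)       # mixture density at each sample
    post = comp / mix            # posterior probabilities h_k(x_n)
    # gradients w.r.t. means and variances (standard mixture calculus)
    grad_mu = (post * (x - mu[:, None]) / var[:, None]).mean(axis=1)
    grad_var = (post * ((x - mu[:, None]) ** 2 / var[:, None] - 1)
                / (2 * var[:, None])).mean(axis=1)
    mu = mu + lr * grad_mu
    var = np.maximum(var + lr * grad_var, 1e-3)   # keep variances positive
    # mixing weights: the likelihood term pulls toward the mean posterior, while
    # the entropy term gamma * (log alpha + 1) is strongly negative for small
    # alpha_k, so weak components are competitively suppressed toward zero.
    grad_alpha = post.mean(axis=1) / alpha + gamma * (np.log(alpha) + 1.0)
    grad_alpha = np.clip(grad_alpha, -50.0, 50.0)  # numerical safety
    alpha = alpha * np.exp(lr * grad_alpha)        # multiplicative update, alpha > 0
    return alpha / alpha.sum(), mu, var            # renormalize onto the simplex
```

Iterating this step from an over-specified K is the sketch's analogue of the automatic model selection described above: components whose mixing weights fall below a small threshold can be discarded, leaving the surviving Gaussians as the selected model.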
