International Conference on Automation and Computing

Investigation on the construction of the Relevance Vector Machine based on cross entropy minimization


Abstract

As a machine learning method under the sparse Bayesian framework, the classical Relevance Vector Machine (RVM) applies kernel methods to construct Radial Basis Function (RBF) networks using a minimal number of relevant basis functions. Compared to the well-known Support Vector Machine (SVM), the RVM provides better sparsity and automatic estimation of the hyperparameters. However, the performance of the original RVM depends entirely on the smoothness of the prior assumed over the connection weights and parameters; consequently, the sparsity is in practice still controlled by the choice of kernel functions and kernel parameters, which can lead to severe underfitting or overfitting in some cases. In the research presented in this paper, we explicitly incorporate the number of basis functions into the objective of the optimization procedure, and construct the RVM by minimizing the cross entropy between the "hypothetical" probability distribution in the forward training pathway and the "true" probability distribution in the backward testing pathway. The experimental results show that the proposed methodology achieves both minimal structural complexity and an appropriate fit to the data.
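For reference, since the abstract does not give the exact forms of the forward "hypothetical" distribution p and the backward "true" distribution q, the quantity being minimized is the standard cross entropy between two discrete distributions,

H(p, q) = -\sum_{x} p(x) \log q(x),

which, for a fixed p, is smallest when q matches p; driving it down therefore penalizes a forward training model that disagrees with what the backward testing pathway observes.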
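The abstract refers to the classical RVM of the sparse Bayesian literature as its baseline. The sketch below is a minimal illustration of that baseline only, using the usual formulation (an RBF design matrix plus evidence-maximization updates of the per-weight precisions, with sparsity obtained by pruning diverging precisions). The helper names rbf_design and rvm_regression and the parameters width and prune_tol are illustrative assumptions, not anything specified by this paper, whose own construction replaces the purely prior-driven sparsity shown here with a cross-entropy objective that involves the number of basis functions explicitly.

import numpy as np

def rbf_design(X, centers, width=1.0):
    """RBF design matrix: Phi[n, m] = exp(-||x_n - c_m||^2 / (2 * width^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def rvm_regression(Phi, t, n_iter=200, prune_tol=1e6):
    """Classical RVM regression via evidence maximization (Tipping-style updates).

    alpha[m] is the prior precision of weight m; beta is the noise precision.
    Basis functions whose alpha diverges are pruned, which yields the sparsity.
    """
    N, M = Phi.shape
    alpha = np.ones(M)
    beta = 1.0 / (np.var(t) + 1e-12)
    keep = np.arange(M)
    mu = np.zeros(M)
    for _ in range(n_iter):
        # Posterior over weights: Sigma = (diag(alpha) + beta Phi^T Phi)^-1, mu = beta Sigma Phi^T t
        Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)
        mu = beta * Sigma @ Phi.T @ t
        # Hyperparameter re-estimation (MacKay / Tipping updates)
        g = 1.0 - alpha * np.diag(Sigma)
        alpha = g / (mu ** 2 + 1e-12)
        beta = (N - g.sum()) / (np.sum((t - Phi @ mu) ** 2) + 1e-12)
        # Prune basis functions whose precisions diverge (weights driven to zero)
        mask = alpha < prune_tol
        Phi, alpha, mu, keep = Phi[:, mask], alpha[mask], mu[mask], keep[mask]
    return mu, keep  # weights and original indices of the surviving relevance vectors

A toy usage, again purely illustrative, places one candidate basis function on each training point; the pruning step is what produces the sparsity that the paper argues is still implicitly governed by the kernel choice:

# Hypothetical 1-D regression example (names and data are assumptions, not from the paper)
rng = np.random.default_rng(0)
X = rng.uniform(-10, 10, size=(100, 1))
t = np.sinc(X[:, 0] / np.pi) + 0.05 * rng.standard_normal(100)
Phi = rbf_design(X, centers=X, width=2.0)
weights, relevant = rvm_regression(Phi, t)
print(len(relevant), "relevance vectors retained out of", len(X))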
