Published in: Computational Intelligence and Neuroscience

Generalization Bounds for Coregularized Multiple Kernel Learning



Abstract

Multiple kernel learning (MKL), as an approach to automated kernel selection, plays an important role in machine learning. Several learning theories have been developed to analyze the generalization of multiple kernel learning. However, little work has studied multiple kernel learning in the framework of semisupervised learning. In this paper, we analyze the generalization of multiple kernel learning in the framework of semisupervised multiview learning. We apply Rademacher chaos complexity to control the capacity of the candidate class of coregularized multiple kernels and obtain a generalization error bound for coregularized multiple kernel learning. Furthermore, we show that existing results on multiple kernel learning and coregularized kernel learning can be regarded as special cases of the main results of this paper.
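The coregularized objective the abstract refers to can be made concrete with a small sketch. Below is a minimal two-view co-regularized least-squares example in NumPy — a hypothetical illustration of the general technique, not the paper's algorithm or bound. Each view has its own kernel; labeled points incur a squared loss, and an extra term (weight `mu`) penalizes disagreement between the two views' predictions on unlabeled points. In the multiple-kernel setting analyzed by the paper, each per-view kernel would itself range over a candidate class (e.g. convex combinations of base kernels); here each view uses a single fixed RBF kernel for brevity.

```python
import numpy as np

def rbf_kernel(X, Z, gamma):
    """Gaussian RBF kernel matrix between rows of X and rows of Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def coregularized_rls(K1, K2, y_l, n_labeled, lam1=0.1, lam2=0.1, mu=1.0):
    """Two-view co-regularized least squares (illustrative sketch).

    Minimizes, over expansion coefficients a1, a2:
        ||L K1 a1 - y||^2 + ||L K2 a2 - y||^2
        + lam1 * a1' K1 a1 + lam2 * a2' K2 a2
        + mu * ||U (K1 a1 - K2 a2)||^2
    where L / U select the labeled / unlabeled points. Setting the
    gradient to zero yields a 2n x 2n block linear system.
    """
    n = K1.shape[0]
    L = np.zeros((n, n)); L[:n_labeled, :n_labeled] = np.eye(n_labeled)
    U = np.zeros((n, n)); U[n_labeled:, n_labeled:] = np.eye(n - n_labeled)
    y = np.zeros(n); y[:n_labeled] = y_l  # zero-padded label vector

    # Stationarity conditions, one block row per view; the off-diagonal
    # blocks couple the two views through the unlabeled points.
    A = np.block([
        [L @ K1 + lam1 * np.eye(n) + mu * U @ K1, -mu * U @ K2],
        [-mu * U @ K1, L @ K2 + lam2 * np.eye(n) + mu * U @ K2],
    ])
    b = np.concatenate([y, y])
    alpha = np.linalg.solve(A, b)
    return alpha[:n], alpha[n:]

# Toy usage: 6 labeled + 4 unlabeled points, two RBF views.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 2))
K1 = rbf_kernel(X, X, 0.5)
K2 = rbf_kernel(X, X, 2.0)
y_l = np.array([1.0, -1.0, 1.0, -1.0, 1.0, -1.0])
a1, a2 = coregularized_rls(K1, K2, y_l, n_labeled=6)
pred = 0.5 * (K1 @ a1 + K2 @ a2)  # average the two views' predictions
```

The Rademacher chaos complexity in the paper controls how rich the class of such combined kernels may be while still guaranteeing that the empirical coregularized risk generalizes; the sketch above fixes one kernel per view only to keep the optimization explicit.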


