International Joint Conference on Artificial Intelligence

Self-weighted Multiple Kernel Learning for Graph-based Clustering and Semi-supervised Classification



Abstract

Multiple kernel learning (MKL) is generally believed to perform better than single-kernel methods. However, some empirical studies show that this is not always true: a combination of multiple kernels can yield even worse performance than a single kernel. There are two possible reasons for this failure: (i) most existing MKL methods assume that the optimal kernel is a linear combination of base kernels, which may not hold; and (ii) some kernel weights are assigned inappropriately because of noise and carelessly designed algorithms. In this paper, we propose a novel MKL framework based on two intuitive assumptions: (i) each kernel is a perturbation of the consensus kernel; and (ii) a kernel that is close to the consensus kernel should receive a large weight. Notably, the proposed method automatically assigns an appropriate weight to each kernel without introducing the additional parameters that existing methods require. The proposed approach is integrated into a unified framework for graph-based clustering and semi-supervised classification. Experiments on multiple benchmark datasets verify the superiority of the proposed framework.
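To illustrate the self-weighting idea in the abstract, the sketch below alternates between estimating a consensus kernel and reweighting each base kernel inversely to its Frobenius distance from that consensus, so closer kernels automatically receive larger weights with no extra trade-off parameter. This is a hypothetical minimal sketch of the weighting principle, not the paper's full clustering objective; the function name `consensus_kernel` and the specific update rule `w_i ∝ 1 / (2‖K_i − K*‖_F)` are assumptions for illustration.

```python
import numpy as np

def consensus_kernel(kernels, n_iter=20):
    """Alternately estimate a consensus kernel and per-kernel weights.

    Self-weighting rule (illustrative): w_i ∝ 1 / (2 * ||K_i - K*||_F),
    so kernels close to the consensus get larger weights automatically,
    without introducing an additional trade-off parameter.
    """
    kernels = np.asarray(kernels)           # shape (m, n, n)
    K = kernels.mean(axis=0)                # initialize consensus as average
    for _ in range(n_iter):
        # Distance of each base kernel from the current consensus.
        dists = np.array([np.linalg.norm(Ki - K, 'fro') for Ki in kernels])
        w = 1.0 / (2.0 * np.maximum(dists, 1e-12))  # guard against zero
        w /= w.sum()                        # normalize for interpretability
        # Consensus = weight-averaged combination of the base kernels.
        K = np.einsum('i,ijk->jk', w, kernels)
    return K, w
```

For example, given two identical base kernels and one outlier, the alternation drives the consensus toward the agreeing pair and shrinks the outlier's weight, which mirrors assumption (ii) above.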
