
Accelerated max-margin multiple kernel learning


Abstract

Kernel machines such as Support Vector Machines (SVMs) have exhibited strong performance in pattern classification problems, mainly because they exploit potentially nonlinear affinity structures in the data through kernel functions. Hence, selecting an appropriate kernel function, or equivalently, learning the kernel parameters accurately, has a crucial impact on the classification performance of kernel machines. In this paper we consider the problem of learning a kernel matrix in a binary classification setup, where the hypothesis kernel family is represented as the convex hull of fixed basis kernels. While many existing approaches involve computationally intensive quadratic or semi-definite optimization, we propose novel kernel learning algorithms based on large-margin estimation of Parzen window classifiers. The optimization is cast as a set of linear programs. This significantly reduces the complexity of kernel learning compared to existing methods, while our large-margin formulation provides tight upper bounds on the generalization error. We empirically demonstrate that the new kernel learning methods maintain or improve the accuracy of existing classification algorithms while significantly reducing the learning time on many real datasets, in both supervised and semi-supervised settings.
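The core idea in the abstract, learning a convex combination of fixed basis kernels by maximizing the margin of a Parzen window classifier via linear programming, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the toy data, the RBF bandwidths, and the hard-margin LP (maximize the minimum margin over the kernel-weight simplex) are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Toy binary data: two Gaussian blobs, labels in {-1, +1}.
X = np.vstack([rng.normal(-1.5, 1.0, (20, 2)), rng.normal(+1.5, 1.0, (20, 2))])
y = np.array([-1] * 20 + [+1] * 20)

def rbf(X, gamma):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Fixed basis kernels; the hypothesis family is their convex hull.
kernels = [rbf(X, g) for g in (0.1, 1.0, 10.0)]

# Per-kernel Parzen window score f_m(x_j): mean affinity to the
# positive class minus mean affinity to the negative class.
pos, neg = y == +1, y == -1
F = np.stack([K[:, pos].mean(1) - K[:, neg].mean(1) for K in kernels], axis=1)

n, M = F.shape
# LP variables: [beta_1 .. beta_M, rho]; maximize rho <=> minimize -rho.
c = np.r_[np.zeros(M), -1.0]
# Margin constraints: y_j * sum_m beta_m * F[j, m] >= rho for all j.
A_ub = np.hstack([-(y[:, None] * F), np.ones((n, 1))])
b_ub = np.zeros(n)
# Simplex constraint: sum_m beta_m = 1.
A_eq = np.r_[np.ones(M), 0.0][None, :]
b_eq = [1.0]
bounds = [(0, None)] * M + [(None, None)]  # beta >= 0, rho free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
beta, rho = res.x[:M], res.x[M]
print("kernel weights:", beta.round(3), "minimum margin:", round(rho, 3))
```

Because the Parzen window score is linear in the kernel weights, the margin-maximization problem stays a linear program regardless of how many basis kernels are combined, which is the source of the speedup over quadratic or semi-definite formulations.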
