Low-rank decomposition meets kernel learning: A generalized Nyström method

Abstract

Low-rank matrix decomposition and kernel learning are two useful techniques in building advanced learning systems. Low-rank decomposition can greatly reduce the computational cost of manipulating large kernel matrices. However, existing approaches are mostly unsupervised and do not incorporate side information such as class labels, making the decomposition less effective for a specific learning task. On the other hand, kernel learning techniques aim at constructing kernel matrices whose structure is well aligned with the learning target, which improves the generalization performance of kernel methods. However, most kernel learning approaches are computationally very expensive. To obtain the advantages of both techniques and address their limitations, in this paper we propose a novel kernel low-rank decomposition formulation called the generalized Nyström method. Our approach inherits the linear time and space complexity via matrix decomposition, while at the same time fully exploits (partial) label information in computing task-dependent decomposition. In addition, the resultant low-rank factors can generalize to arbitrary new samples, rendering great flexibility in inductive learning scenarios. We further extend the algorithm to a multiple kernel learning setup. The experimental results on semi-supervised classification demonstrate the usefulness of the proposed method.
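For context, the sketch below illustrates the classical (unsupervised) Nyström low-rank decomposition that the abstract takes as its starting point, i.e. approximating an n x n kernel matrix from kernel values against a small set of landmark points. It is not the generalized, label-aware method proposed in the paper, and the function names and parameters (rbf_kernel, nystrom_approximation, m, gamma) are illustrative assumptions.

```python
# Minimal sketch of the classical Nystrom approximation:
# K ≈ C @ pinv(W) @ C.T, where C holds kernel values between all n points
# and m landmark points, and W is the m x m kernel among the landmarks.
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix between rows of X and rows of Y."""
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-gamma * sq_dists)

def nystrom_approximation(X, m=50, gamma=1.0, rng=None):
    """Return a low-rank factor L (n x m) with K ≈ L @ L.T.

    Storage and time are O(n*m) rather than O(n^2), which is the
    linear-complexity property the abstract refers to.
    """
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    idx = rng.choice(n, size=min(m, n), replace=False)  # landmark indices
    C = rbf_kernel(X, X[idx], gamma)                     # n x m block
    W = C[idx, :]                                        # m x m landmark kernel
    # Symmetric square root of pinv(W) via eigendecomposition.
    vals, vecs = np.linalg.eigh(W)
    keep = vals > 1e-10
    W_inv_sqrt = vecs[:, keep] @ np.diag(vals[keep] ** -0.5) @ vecs[:, keep].T
    return C @ W_inv_sqrt                                # L, with K ≈ L @ L.T

if __name__ == "__main__":
    X = np.random.default_rng(0).standard_normal((500, 10))
    L = nystrom_approximation(X, m=100, gamma=0.5, rng=0)
    K = rbf_kernel(X, X, gamma=0.5)
    err = np.linalg.norm(K - L @ L.T) / np.linalg.norm(K)
    print(f"relative approximation error: {err:.3f}")
```

The paper's contribution is to make the choice of such low-rank factors task-dependent by exploiting (partial) label information, while keeping this linear time and space complexity and allowing the factors to extend to unseen samples.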

Bibliographic Information

  • Source
    Artificial Intelligence | 2017, No. 9 | pp. 1-15 | 15 pages
  • Author Affiliations

    Lenovo Group Limited, 100 Cyberport Road, Hong Kong;

    Department of Computer and Information Sciences, Temple University, Philadelphia, PA, United States;

    Department of Computer Science and Engineering, Texas A&M University, Texas, United States;

    NEC Laboratories America, Princeton, United States;

    SAS Institute Inc., Cary, NC, United States;

    Institute of Software Technology and Interactive Systems, Technical University of Vienna, Austria;

    Institute for Infocomm Research, Singapore;

    School of Computer Science and Software Engineering, East China Normal University, Shanghai, China;

    School of Computer Science and Software Engineering, East China Normal University, Shanghai, China; College of Computing, Georgia Institute of Technology, Atlanta, GA, United States;

  • Indexing Information
  • Original Format: PDF
  • Language: eng
  • CLC Classification
  • Keywords

    Kernel learning; Large-scale learning algorithms; Multiple kernel learning; Nyström low-rank decomposition;

  • Date Added: 2022-08-18 02:05:13
