Journal: IEEE Transactions on Neural Networks and Learning Systems

SVRG-MKL: A Fast and Scalable Multiple Kernel Learning Solution for Features Combination in Multi-Class Classification Problems



Abstract

In this paper, we present a novel strategy to combine a set of compact descriptors to leverage an associated recognition task. We formulate the problem from a multiple kernel learning (MKL) perspective and solve it following a stochastic variance reduced gradient (SVRG) approach to address its scalability, currently an open issue. MKL models are ideal candidates to jointly learn the optimal combination of features along with the associated predictor. However, they are unable to scale beyond a dozen thousand samples due to high computational and memory requirements, which severely limits their applicability. We propose SVRG-MKL, an MKL solution with inherent scalability properties that can optimally combine multiple descriptors involving millions of samples. Our solution operates directly in the primal to avoid Gram matrix computation and memory allocation, and the optimization is performed with a proposed algorithm of linear complexity, making it computationally efficient. Our proposition builds upon recent progress in SVRG, with the distinction that each kernel is treated differently during optimization, which results in faster convergence than applying off-the-shelf SVRG to MKL. Extensive experimental validation conducted on several benchmarking data sets confirms a higher accuracy and a significant speedup of our solution. Our technique can be extended to other MKL problems, including visual search and transfer learning, as well as other formulations, such as group-sensitive (GMKL) and localized MKL (LMKL) in convex settings.
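To make the optimization strategy concrete, the sketch below shows plain, off-the-shelf SVRG applied in the primal to an L2-regularized logistic-regression objective. This is only an illustration of the variance-reduction mechanism the paper builds on, not the authors' SVRG-MKL algorithm (which additionally treats each kernel's feature block differently during optimization); the function name, step size, and inner-loop length are illustrative choices.

```python
import numpy as np

def svrg_logistic(X, y, lr=0.1, epochs=5, seed=0):
    """Plain SVRG for L2-regularized logistic regression, labels in {-1, +1}.

    Each epoch: (1) compute the full gradient at a snapshot point,
    (2) run stochastic steps whose noise is reduced by that anchor.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    lam = 1.0 / n  # illustrative regularization strength
    w_snap = np.zeros(d)

    def sigma(m):
        # numerically safe logistic weight 1 / (1 + exp(m))
        return 1.0 / (1.0 + np.exp(np.clip(m, -30.0, 30.0)))

    def grad_i(w, i):
        # gradient of the i-th loss term plus the regularizer
        return -y[i] * X[i] * sigma(y[i] * (X[i] @ w)) + lam * w

    for _ in range(epochs):
        # full gradient at the snapshot: the variance-reduction anchor
        weights = -y * sigma(y * (X @ w_snap))
        full_grad = (X * weights[:, None]).mean(axis=0) + lam * w_snap
        w = w_snap.copy()
        for _ in range(2 * n):  # inner stochastic loop
            i = rng.integers(n)
            # SVRG update: stochastic gradient, centered by the snapshot
            w -= lr * (grad_i(w, i) - grad_i(w_snap, i) + full_grad)
        w_snap = w  # new snapshot for the next epoch
    return w_snap
```

Working in the primal as above means the per-step cost is linear in the feature dimension and no n-by-n Gram matrix is ever formed; in an MKL setting, each kernel would contribute its own explicit feature block (e.g., a random-feature approximation) to `X`.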

