Knowledge-Based Systems

Safe sample screening for regularized multi-task learning



Abstract

As a machine learning paradigm, multi-task learning (MTL) has attracted increasing attention in recent years. By exploiting the correlations among different tasks, it can improve overall performance, and it is especially helpful for small-sample learning problems. As a classic multi-task learner, regularized multi-task learning (RMTL) has inspired much subsequent MTL research, and extensive studies have shown that RMTL outperforms single-task learners such as the support vector machine. However, its training cost becomes considerable on large datasets. To tackle this problem, we propose safe screening rules for an improved regularized multi-task support vector machine (IRMTL). By statically detecting and removing inactive samples from multiple tasks simultaneously, before solving the reduced optimization problem, both rules reduce the training time significantly without degrading the performance of the proposed method. Experimental results on 13 benchmark datasets and an image dataset clearly demonstrate the effectiveness of the safe screening rules for IRMTL. (C) 2020 Elsevier B.V. All rights reserved.
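The screening idea described above can be sketched generically. The following is a minimal illustration of ball-based safe sample screening for a hinge-loss SVM, not the paper's exact IRMTL rule: given a reference weight vector and a radius certifying that the optimal solution lies within a ball around it, any sample whose worst-case margin over that ball still exceeds 1 is provably inactive (its dual variable is zero at the optimum) and can be removed before training. The names `safe_screen`, `w_ref`, and `radius` are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

def safe_screen(X, y, w_ref, radius):
    """Mark which samples must be kept for training.

    Assumes the optimal weight vector w* satisfies ||w* - w_ref|| <= radius.
    By Cauchy-Schwarz, the margin of sample i at w* is bounded below by
        y_i <x_i, w_ref> - radius * ||x_i||.
    If that lower bound exceeds 1, the hinge loss at sample i is zero for
    every feasible w*, so the sample is safely inactive and can be dropped.

    Returns a boolean mask: True = keep (cannot be certified inactive).
    """
    margins = y * (X @ w_ref)              # margins at the reference point
    norms = np.linalg.norm(X, axis=1)      # per-sample feature norms
    lower_bounds = margins - radius * norms
    return lower_bounds <= 1.0             # only uncertified samples remain

# Toy usage: the first sample's worst-case margin (1.8) exceeds 1,
# so it is screened out; the second must be kept.
X = np.array([[2.0, 0.0], [0.1, 0.0]])
y = np.array([1.0, 1.0])
keep = safe_screen(X, y, w_ref=np.array([1.0, 0.0]), radius=0.1)
```

A tighter radius (e.g. one derived from a duality gap, as in gap-based screening) certifies more samples as inactive, which is why screening is applied before solving the reduced problem.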


