Exploiting Multitask Learning Schemes Using Private Subnetworks

Abstract

Many problems in pattern recognition focus on learning a single main task, an approach known as Single Task Learning (STL). However, most of them can be reformulated so that several tasks related to the main task are learned simultaneously over a shared representation, known as Multitask Learning (MTL). In this paper, a new MTL architecture is proposed and its performance is compared with that of previous MTL schemes. The new scheme uses private subnetworks to induce a bias in the learning process. Results on artificial and real data sets show that the use of private subnetworks in MTL yields better generalization and faster learning.
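To make the architecture concrete, the following is a minimal sketch, not the authors' implementation, of a hard-parameter-sharing MTL network extended with one private subnetwork per task, assuming PyTorch; the class name, layer sizes, and the choice of Tanh activation are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PrivateSubnetMTL(nn.Module):
    """Sketch: shared representation plus one private subnetwork per task."""

    def __init__(self, in_dim, shared_dim, private_dim, n_tasks):
        super().__init__()
        # Shared hidden layer: the common representation trained by all tasks.
        self.shared = nn.Sequential(nn.Linear(in_dim, shared_dim), nn.Tanh())
        # One private subnetwork per task: hidden units that feed only that
        # task's output, giving each task capacity the others cannot use.
        self.private = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, private_dim), nn.Tanh())
            for _ in range(n_tasks)
        )
        # Each task head combines the shared and its private representation.
        self.heads = nn.ModuleList(
            nn.Linear(shared_dim + private_dim, 1) for _ in range(n_tasks)
        )

    def forward(self, x):
        s = self.shared(x)
        return [head(torch.cat([s, p(x)], dim=1))
                for p, head in zip(self.private, self.heads)]

# Usage: summing the per-task losses means every task's gradient shapes the
# shared layer, while each private subnetwork is updated by its task alone.
net = PrivateSubnetMTL(in_dim=8, shared_dim=16, private_dim=4, n_tasks=3)
outputs = net(torch.randn(32, 8))  # one (32, 1) tensor per task
targets = [torch.randn(32, 1) for _ in outputs]
loss = sum(nn.functional.mse_loss(o, t) for o, t in zip(outputs, targets))
loss.backward()
```

Under this reading, the private subnetworks are what bias the learning process: related tasks pull the shared layer toward features useful to all of them, while task-specific structure is absorbed by the private units instead of distorting the shared representation.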
