Workshop on Simple and Efficient Natural Language Processing

Quasi-Multitask Learning: an Efficient Surrogate for Constructing Model Ensembles


Abstract

We propose quasi-multitask learning (Q-MTL), a simple and easy-to-implement modification of standard multitask learning in which the tasks to be modeled are identical. With this straightforward change to a standard neural classifier, we can obtain benefits similar to those of an ensemble of classifiers at a fraction of the resources required. We illustrate, through a series of sequence labeling experiments over a diverse set of languages, that applying Q-MTL consistently increases the generalization ability of the applied models. The proposed architecture can be regarded as a new regularization technique that encourages the model to develop an internal representation of the problem at hand that is beneficial to multiple output units of the classifier at the same time. Our experiments corroborate that, by relying on the proposed algorithm, we can approximate the quality of an ensemble of classifiers at a fraction of the computational resources required. Additionally, our results suggest that Q-MTL handles the presence of noisy training labels better than ensembles do.
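To make the idea concrete, below is a minimal PyTorch sketch of a Q-MTL sequence tagger. The BiLSTM encoder, the head count of four, and averaging the heads' output distributions at prediction time are illustrative assumptions, not details taken from the paper.

    import torch
    import torch.nn as nn

    class QMTLTagger(nn.Module):
        # One shared encoder feeds k identical output heads, all trained on
        # the same labels; only their random initialization differs.
        def __init__(self, vocab_size, embed_dim, hidden_dim, num_tags, num_heads=4):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.encoder = nn.LSTM(embed_dim, hidden_dim,
                                   batch_first=True, bidirectional=True)
            self.heads = nn.ModuleList(
                nn.Linear(2 * hidden_dim, num_tags) for _ in range(num_heads))

        def forward(self, tokens):
            states, _ = self.encoder(self.embed(tokens))
            return [head(states) for head in self.heads]  # one logit tensor per head

    def qmtl_loss(head_logits, gold_tags):
        # Average of the per-head losses on identical gold labels: the shared
        # encoder must develop a representation useful to every head at once.
        ce = nn.CrossEntropyLoss()
        return sum(ce(logits.flatten(0, 1), gold_tags.flatten())
                   for logits in head_logits) / len(head_logits)

    def qmtl_predict(head_logits):
        # Ensemble-style inference at the cost of a single forward pass:
        # average the heads' predicted distributions, then take the argmax.
        probs = torch.stack([logits.softmax(-1) for logits in head_logits]).mean(0)
        return probs.argmax(-1)

A hypothetical usage run, with made-up sizes:

    model = QMTLTagger(vocab_size=10_000, embed_dim=100, hidden_dim=128, num_tags=17)
    tokens = torch.randint(0, 10_000, (8, 20))   # batch of 8 sentences, length 20
    gold = torch.randint(0, 17, (8, 20))
    loss = qmtl_loss(model(tokens), gold)        # one backward pass trains all heads
    loss.backward()
    tags = qmtl_predict(model(tokens))

Unlike a true ensemble of k independent networks, only the small linear heads are replicated here, so training and inference cost stays close to that of a single model.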
