1st EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, 2018

When does deep multi-task learning work for loosely related document classification tasks?



Abstract

This work aims to contribute to our understanding of when multi-task learning through parameter sharing in deep neural networks leads to improvements over single-task learning. We focus on the setting of learning from loosely related tasks, for which no theoretical guarantees exist. We therefore approach the question empirically, studying which dataset properties and single-task learning characteristics correlate with gains from multi-task learning. We are the first to study this in a text classification setting and across more than 500 different task pairs.
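The abstract's core mechanism, multi-task learning through parameter sharing, can be illustrated with a minimal sketch. The code below is a toy illustration of hard parameter sharing, not the paper's actual architecture: two document classification tasks share one hidden layer, while each task keeps its own output head. All sizes and names (`W_shared`, `heads`, `forward`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, HIDDEN = 100, 16  # toy dimensions, chosen for illustration only

# Shared encoder parameters: gradients from BOTH tasks would update these.
W_shared = rng.normal(0, 0.1, (VOCAB, HIDDEN))

# Task-specific heads: each task has its own label space and parameters.
heads = {
    "task_a": rng.normal(0, 0.1, (HIDDEN, 2)),  # binary task
    "task_b": rng.normal(0, 0.1, (HIDDEN, 3)),  # 3-way task
}

def forward(bow, task):
    """Map a bag-of-words vector to class scores for one task."""
    h = np.tanh(bow @ W_shared)  # shared representation
    return h @ heads[task]       # task-specific projection

# Toy document: three active vocabulary entries.
doc = np.zeros(VOCAB)
doc[[3, 17, 42]] = 1.0

scores_a = forward(doc, "task_a")
scores_b = forward(doc, "task_b")
```

The design point this sketch captures is that the shared encoder receives training signal from every task pair, which is exactly where gains (or interference) between loosely related tasks arise.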

