
Transfer Learning for Neural Semantic Parsing

2nd Workshop on Representation Learning for NLP, 2017


Abstract

The goal of semantic parsing is to map natural language to a machine-interpretable meaning representation language (MRL). One of the constraints that limits full exploration of deep learning technologies for semantic parsing is the lack of sufficient annotated training data. In this paper, we propose using sequence-to-sequence models in a multi-task setup for semantic parsing, with a focus on transfer learning. We explore three multi-task architectures for sequence-to-sequence modeling and compare their performance with an independently trained model. Our experiments show that the multi-task setup aids transfer learning from an auxiliary task with large labeled data to a target task with smaller labeled data. We see absolute accuracy gains ranging from 1.0% to 4.4% on our in-house dataset, and we also see good gains ranging from 2.5% to 7.0% on the ATIS semantic parsing tasks with syntactic and semantic auxiliary tasks.
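The abstract names three multi-task sequence-to-sequence architectures without spelling them out. Below is a minimal PyTorch sketch of one common variant, a shared encoder with one decoder per task, to illustrate where the transfer happens: gradients from the data-rich auxiliary task update the same encoder that the low-resource target task reads from. All class names, dimensions, and the toy data are illustrative assumptions, not the paper's implementation.

    # Minimal sketch (not the authors' code) of a one-to-many multi-task
    # seq2seq setup: a shared encoder, one decoder per task. The auxiliary
    # task's gradients update the encoder the target task also uses.
    import torch
    import torch.nn as nn

    class SharedEncoder(nn.Module):
        """LSTM encoder shared across all tasks."""
        def __init__(self, vocab_size, emb_dim=64, hid_dim=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.rnn = nn.LSTM(emb_dim, hid_dim, batch_first=True)

        def forward(self, src):                        # src: (batch, src_len)
            outputs, state = self.rnn(self.embed(src))
            return outputs, state

    class TaskDecoder(nn.Module):
        """Task-specific LSTM decoder that predicts MRL tokens."""
        def __init__(self, vocab_size, emb_dim=64, hid_dim=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.rnn = nn.LSTM(emb_dim, hid_dim, batch_first=True)
            self.out = nn.Linear(hid_dim, vocab_size)

        def forward(self, tgt, state):                 # tgt: (batch, tgt_len)
            outputs, state = self.rnn(self.embed(tgt), state)
            return self.out(outputs), state

    class MultiTaskSeq2Seq(nn.Module):
        """One-to-many setup: one shared encoder, a decoder per task."""
        def __init__(self, src_vocab, tgt_vocabs):     # tgt_vocabs: {task: size}
            super().__init__()
            self.encoder = SharedEncoder(src_vocab)
            self.decoders = nn.ModuleDict(
                {task: TaskDecoder(size) for task, size in tgt_vocabs.items()}
            )

        def forward(self, task, src, tgt_in):
            _, state = self.encoder(src)
            logits, _ = self.decoders[task](tgt_in, state)
            return logits

    # Toy training step: alternate batches from the large auxiliary task and
    # the small target task; both losses flow through the shared encoder.
    model = MultiTaskSeq2Seq(src_vocab=100, tgt_vocabs={"aux": 80, "target": 60})
    optim = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for task, tgt_vocab in [("aux", 80), ("target", 60)]:
        src = torch.randint(0, 100, (4, 7))            # fake source token ids
        tgt = torch.randint(0, tgt_vocab, (4, 5))      # fake MRL token ids
        logits = model(task, src, tgt[:, :-1])         # teacher forcing
        loss = loss_fn(logits.reshape(-1, tgt_vocab), tgt[:, 1:].reshape(-1))
        optim.zero_grad()
        loss.backward()
        optim.step()

Because only the decoders are task-specific, most of the model's capacity (embeddings and encoder LSTM) trains on the combined data, which is the mechanism behind the transfer gains the abstract reports.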
