Venue: Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Improving Grammatical Error Correction via Pre-Training a Copy-Augmented Architecture with Unlabeled Data



Abstract

Neural machine translation systems have become the state-of-the-art approach for the Grammatical Error Correction (GEC) task. In this paper, we propose a copy-augmented architecture for the GEC task that copies the unchanged words from the source sentence to the target sentence. Since GEC suffers from not having enough labeled training data to achieve high accuracy, we pre-train the copy-augmented architecture with a denoising auto-encoder using the unlabeled One Billion Benchmark and compare the fully pre-trained model with a partially pre-trained model. This is the first time that copying words from the source context and fully pre-training a sequence-to-sequence model have been experimented with on the GEC task. Moreover, we add token-level and sentence-level multi-task learning for the GEC task. Evaluation results on the CoNLL-2014 test set show that our approach outperforms all recently published state-of-the-art results by a large margin.
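The copy mechanism summarized above can be sketched as a mixture of the decoder's generation distribution over the vocabulary and a copy distribution obtained from attention over the source tokens. A minimal NumPy sketch, assuming hypothetical `copy_scores` (unnormalized attention over source positions) and a fixed mixing weight `alpha`; in the actual model the balancing factor is predicted per decoding step:

```python
import numpy as np

def copy_augmented_distribution(p_gen, copy_scores, src_token_ids, alpha):
    """Mix the generation distribution with a copy distribution.

    p_gen:         generation distribution over the vocabulary (sums to 1)
    copy_scores:   unnormalized attention scores, one per source position
    src_token_ids: vocabulary id of the token at each source position
    alpha:         copy mixing weight (a constant here for illustration;
                   predicted per time step in the actual architecture)
    """
    # Softmax over source positions yields the copy attention weights.
    attn = np.exp(copy_scores - copy_scores.max())
    attn /= attn.sum()
    # Scatter attention mass onto the vocabulary ids of the source tokens.
    p_copy = np.zeros_like(p_gen)
    for tok, a in zip(src_token_ids, attn):
        p_copy[tok] += a
    return (1.0 - alpha) * p_gen + alpha * p_copy

# Tokens present in the source sentence receive extra probability mass,
# biasing the decoder toward copying unchanged words into the output.
p_gen = np.full(6, 1.0 / 6)                      # uniform toy vocabulary of 6
p_out = copy_augmented_distribution(p_gen, np.array([2.0, 1.0]), [3, 5], 0.5)
```

The result is still a valid probability distribution, but the ids of the two source tokens (3 and 5 in this toy example) are boosted relative to the rest of the vocabulary.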


