Annual meeting of the Association for Computational Linguistics

Self-Attention Architectures for Answer-Agnostic Neural Question Generation

Abstract

Neural architectures based on self-attention, such as Transformers, have recently attracted interest from the research community and obtained significant improvements over the state of the art in several tasks. We explore how Transformers can be adapted to the task of Neural Question Generation without constraining the model to focus on a specific answer passage. We study the effect of several strategies for dealing with out-of-vocabulary words, such as copy mechanisms, placeholders, and contextual word embeddings. We report improvements over the state of the art on the SQuAD dataset according to automated metrics (BLEU, ROUGE), as well as qualitative human assessments of the system outputs.
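
The abstract lists placeholders among the strategies for handling out-of-vocabulary (OOV) words. As a point of reference, below is a minimal sketch of how such a placeholder scheme is commonly implemented: OOV source tokens are replaced by indexed masks before encoding, and masks emitted by the model are restored in post-processing. The function names and placeholder format here are illustrative assumptions, not the paper's implementation.

```python
from typing import Dict, List, Tuple

def mask_oov(tokens: List[str], vocab: set,
             max_slots: int = 10) -> Tuple[List[str], Dict[str, str]]:
    """Replace each out-of-vocabulary token with an indexed placeholder.

    Repeated OOV words reuse the same slot, so the model sees a
    consistent symbol it can learn to copy into the question.
    """
    slot_of: Dict[str, str] = {}  # OOV word -> placeholder
    masked: List[str] = []
    for tok in tokens:
        if tok in vocab:
            masked.append(tok)
        else:
            if tok not in slot_of and len(slot_of) < max_slots:
                slot_of[tok] = f"[OOV_{len(slot_of) + 1}]"
            masked.append(slot_of.get(tok, "[UNK]"))
    # Invert the map so placeholders in the output can be restored.
    restore = {ph: word for word, ph in slot_of.items()}
    return masked, restore

def unmask(generated: List[str], restore: Dict[str, str]) -> List[str]:
    """Map any placeholder the model emitted back to its source token."""
    return [restore.get(tok, tok) for tok in generated]

# Example: "Turing" is OOV but survives the round trip via its placeholder.
vocab = {"who", "is", "the", "father", "of", "computing", "?"}
masked, restore = mask_oov("the father of computing is Turing".split(), vocab)
print(masked)  # ['the', 'father', 'of', 'computing', 'is', '[OOV_1]']
print(unmask("who is [OOV_1] ?".split(), restore))  # ['who', 'is', 'Turing', '?']
```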
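Similarly, the automated metrics named above (BLEU, ROUGE) can be computed with standard libraries. A hypothetical evaluation snippet, assuming the `nltk` and `rouge-score` packages rather than the authors' own scripts:

```python
# pip install nltk rouge-score
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

reference = "who is considered the father of computing ?"
hypothesis = "who is the father of computing ?"

# BLEU-4 with smoothing, since short questions rarely share 4-grams.
smooth = SmoothingFunction().method1
bleu = sentence_bleu([reference.split()], hypothesis.split(),
                     smoothing_function=smooth)

# ROUGE-L F-measure between the reference and generated question.
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
rouge_l = scorer.score(reference, hypothesis)["rougeL"].fmeasure

print(f"BLEU: {bleu:.3f}  ROUGE-L: {rouge_l:.3f}")
```

Corpus-level results on SQuAD would aggregate these per-question scores over the full test set rather than report a single sentence pair.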
