Venue: Conference of the European Chapter of the Association for Computational Linguistics

GRIT: Generative Role-filler Transformers for Document-level Event Entity Extraction



Abstract

We revisit the classic problem of document-level role-filler entity extraction (REE) for template filling. We argue that sentence-level approaches are ill-suited to the task and introduce a generative transformer-based encoder-decoder framework (GRIT) designed to model context at the document level: it can make extraction decisions across sentence boundaries; is implicitly aware of noun phrase coreference structure; and has the capacity to respect cross-role dependencies in the template structure. We evaluate our approach on the MUC-4 dataset and show that our model performs substantially better than prior work. We also show that our modeling choices contribute to model performance, e.g., by implicitly capturing linguistic knowledge such as recognizing coreferent entity mentions.
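The abstract casts template filling as sequence generation: the decoder emits role fillers for all template slots as one linearized sequence, which is then parsed back into a structured template. The sketch below (not the authors' code; the delimiter tokens, role names, and helper functions are illustrative assumptions) shows one way such a linearization and its inverse could look for MUC-4-style roles. The actual GRIT model decodes pointers into the source document rather than free-form tokens.

```python
# Illustrative sketch only: template filling as sequence generation.
# SEP/END delimiters and the linearization format are hypothetical;
# GRIT itself generates pointers to document tokens, not raw strings.

SEP, END = "[SEP]", "[END]"  # assumed delimiter tokens

# MUC-4-style role inventory (fixed order so empty roles stay aligned)
ROLES = ["PerpInd", "PerpOrg", "Target", "Victim", "Weapon"]

def linearize(template):
    """Flatten a role -> fillers mapping into one decoder target sequence.

    A SEP token closes each role, even when it has no fillers, so the
    parser can recover role boundaries without extra markers.
    """
    out = []
    for role in ROLES:
        out.extend(template.get(role, []))
        out.append(SEP)
    out.append(END)
    return out

def parse(tokens):
    """Invert linearize: recover the role -> fillers mapping."""
    template, role_idx, fillers = {}, 0, []
    for tok in tokens:
        if tok == END:
            break
        if tok == SEP:
            if fillers:
                template[ROLES[role_idx]] = fillers
            role_idx, fillers = role_idx + 1, []
        else:
            fillers.append(tok)
    return template

# Round trip: a partially filled template survives linearize -> parse.
t = {"PerpOrg": ["FMLN"], "Target": ["power station"]}
assert parse(linearize(t)) == t
```

Fixing the role order in the target sequence is what lets a left-to-right decoder condition each filler on the ones already emitted, which is how cross-role dependencies can be respected.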


