Annual Meeting of the Association for Computational Linguistics

End-to-end Deep Reinforcement Learning Based Coreference Resolution


Abstract

Recent neural network models have significantly advanced the task of coreference resolution. However, current neural coreference models are typically trained with heuristic loss functions that are computed over a sequence of local decisions. In this paper, we introduce an end-to-end reinforcement learning based coreference resolution model to directly optimize coreference evaluation metrics. Specifically, we modify the state-of-the-art higher-order mention ranking approach in Lee et al. (2018) to a reinforced policy gradient model by incorporating the reward associated with a sequence of coreference linking actions. Furthermore, we introduce maximum entropy regularization for adequate exploration to prevent the model from prematurely converging to a bad local optimum. Our proposed model achieves new state-of-the-art performance on the English OntoNotes v5.0 benchmark.
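The training objective described in the abstract corresponds, in general form, to an entropy-regularized policy-gradient (REINFORCE) objective. The sketch below uses generic symbols (a mention-linking policy \pi_\theta, linking actions a_{1:T}, states s_t, a metric-based reward R, a baseline b, and an entropy weight \lambda); these are illustrative placeholders rather than the paper's exact notation:

J(\theta) = \mathbb{E}_{a_{1:T} \sim \pi_\theta}\Big[ R(a_{1:T}) + \lambda \sum_{t=1}^{T} \mathcal{H}\big( \pi_\theta(\cdot \mid s_t) \big) \Big]

\nabla_\theta J(\theta) \approx \sum_{t=1}^{T} \Big[ \big( R(a_{1:T}) - b \big)\, \nabla_\theta \log \pi_\theta(a_t \mid s_t) + \lambda\, \nabla_\theta \mathcal{H}\big( \pi_\theta(\cdot \mid s_t) \big) \Big]

Here the reward R would be computed from coreference evaluation metrics over a complete sequence of linking actions, b is a variance-reducing baseline, and the entropy term \mathcal{H} encourages exploration so the policy does not converge prematurely to a poor local optimum.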
