Published in: Pacific-Asia Conference on Knowledge Discovery and Data Mining

Best from Top k Versus Top 1: Improving Distant Supervision Relation Extraction with Deep Reinforcement Learning



Abstract

Distant supervision relation extraction is a promising approach to finding new relation instances in large text corpora. Most previous works employ the top-1 strategy, i.e., predicting the relation of a sentence with the highest confidence score, which is not always the optimal solution. To improve distant supervision relation extraction, this work applies the best-from-top-k strategy to explore the possibility of relations with lower confidence scores. We approach the best-from-top-k strategy using a deep reinforcement learning framework, where the model learns to select the optimal relation among the top k candidates for better predictions. Specifically, we employ a deep Q-network, trained to optimize a reward function that reflects the extraction performance under distant supervision. Experiments on three public datasets (news articles, Wikipedia, and biomedical papers) demonstrate that the proposed strategy significantly improves the performance of traditional state-of-the-art relation extractors. We achieve an improvement of 5.13% in average F_1-score over four competitive baselines.
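The selection step described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the function names, relation labels, and Q-values are invented for the example, and a stub dictionary stands in for the trained deep Q-network): a base extractor ranks relations by confidence, and instead of taking the top-1 relation, the learned value function re-scores the top-k candidates and picks the best.

```python
def top_k_candidates(confidences, k):
    """Return the k relation labels with the highest confidence scores."""
    ranked = sorted(confidences, key=confidences.get, reverse=True)
    return ranked[:k]

def best_from_top_k(confidences, q_values, k=3):
    """Re-rank the top-k candidates by the (learned) Q-value and pick the best."""
    candidates = top_k_candidates(confidences, k)
    return max(candidates, key=lambda rel: q_values.get(rel, float("-inf")))

# Illustrative values only: top-1 by confidence is "founder_of", but the
# Q-network prefers "employee_of" among the top 3 candidates.
confidences = {"founder_of": 0.42, "employee_of": 0.38, "born_in": 0.15, "NA": 0.05}
q_values = {"founder_of": 0.1, "employee_of": 0.7, "born_in": 0.2}

print(best_from_top_k(confidences, q_values, k=3))  # -> employee_of
```

In the paper's framework the Q-values come from a network trained against a reward reflecting extraction performance; here they are fixed numbers purely to show how re-ranking can overturn the top-1 prediction.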

