Conference on Empirical Methods in Natural Language Processing (EMNLP)

Multi-Level Structured Self-Attentions for Distantly Supervised Relation Extraction



Abstract

Attention mechanisms are often used in deep neural networks for distantly supervised relation extraction (DS-RE) to distinguish valid from noisy instances. However, traditional 1-D vector attention models are insufficient for learning the different contexts involved in selecting valid instances to predict the relationship for an entity pair. To alleviate this issue, we propose a novel multi-level structured (2-D matrix) self-attention mechanism for DS-RE in a multi-instance learning (MIL) framework using bidirectional recurrent neural networks. In the proposed method, a structured word-level self-attention mechanism learns a 2-D matrix where each row vector represents a weight distribution over different aspects of an instance with respect to the two entities. Targeting the MIL issue, the structured sentence-level attention learns a 2-D matrix where each row vector represents a weight distribution over the selection of different valid instances. Experiments conducted on two publicly available DS-RE datasets show that the proposed framework with a multi-level structured self-attention mechanism significantly outperforms state-of-the-art baselines in terms of PR curves, P@N and F1 measures.
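The word-level structured (2-D matrix) self-attention described in the abstract can be sketched as below. This is a minimal PyTorch illustration assuming the common matrix-attention parameterization A = softmax(W2 tanh(W1 H^T)) over BiRNN hidden states; the class name, the dimensions d_a and r, and the framework choice are illustrative assumptions, not details taken from the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class StructuredSelfAttention(nn.Module):
        """Hypothetical sketch: each of the r attention rows yields a weight
        distribution over the token sequence, capturing a different aspect
        of the instance; the rows together form the 2-D attention matrix."""
        def __init__(self, hidden_dim, d_a=128, r=10):
            super().__init__()
            self.W_s1 = nn.Linear(hidden_dim, d_a, bias=False)
            self.W_s2 = nn.Linear(d_a, r, bias=False)

        def forward(self, H, mask=None):
            # H: (batch, seq_len, hidden_dim) -- BiRNN hidden states
            scores = self.W_s2(torch.tanh(self.W_s1(H)))   # (batch, seq_len, r)
            scores = scores.transpose(1, 2)                 # (batch, r, seq_len)
            if mask is not None:
                # mask out padding tokens before normalizing
                scores = scores.masked_fill(mask.unsqueeze(1) == 0, float('-inf'))
            A = F.softmax(scores, dim=-1)                   # each row: distribution over tokens
            M = torch.bmm(A, H)                             # (batch, r, hidden_dim) sentence matrix
            return M, A

    # Example: a bag of 8 sentences, 30 tokens each, 256-dim BiRNN states
    attn = StructuredSelfAttention(hidden_dim=256, d_a=128, r=10)
    H = torch.randn(8, 30, 256)
    M, A = attn(H)   # M: (8, 10, 256), A: (8, 10, 30)

Under the same assumptions, the sentence-level attention for the MIL setting would apply the identical construction one level up: stack the per-sentence representations of a bag into H so that each attention row weights a different selection over the instances of the bag.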
