Venue: Annual Meeting of the Association for Computational Linguistics; International Joint Conference on Natural Language Processing

Refining Sample Embeddings with Relation Prototypes to Enhance Continual Relation Extraction


Abstract

Continual learning has gained increasing attention in recent years, owing to its biological interpretability and its efficiency in many real-world applications. As a typical continual learning task, continual relation extraction (CRE) aims to extract relations between entities from text, where samples of different relations are delivered to the model continuously. Previous work has shown that storing typical samples of old relations in memory helps the model keep a stable understanding of old relations and avoid forgetting them. However, most methods depend heavily on the memory size, as they simply replay the memorized samples in subsequent tasks. To make fuller use of memorized samples, in this paper we employ relation prototypes to extract the useful information of each relation. Specifically, the prototype embedding for a specific relation is computed from the memorized samples of that relation, which are selected by the K-means algorithm. The prototypes of all relations observed at the current learning stage are used to re-initialize a memory network that refines subsequent sample embeddings, which ensures the model's stable understanding of all observed relations when learning a new task. Compared with previous CRE models, our model uses the memorized information more fully and efficiently, resulting in enhanced CRE performance. Our experiments show that the proposed model outperforms state-of-the-art CRE models and has a clear advantage in avoiding catastrophic forgetting.
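The memory-selection and prototype step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the plain-numpy K-means, the function names, and the embedding shapes are assumptions; the idea is to pick, per cluster, the memorized sample nearest its centroid, then average the memorized embeddings into a relation prototype.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Simple K-means on sample embeddings X (n, d); returns centroids and assignments."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # distance of every sample to every centroid -> (n, k)
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
        assign = dists.argmin(axis=1)
        for j in range(k):
            members = X[assign == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids, assign

def select_memory(X, k):
    """Select K memorized samples for one relation: the sample closest to each centroid."""
    centroids, _ = kmeans(X, k)
    picked = [int(np.linalg.norm(X - c, axis=1).argmin()) for c in centroids]
    return sorted(set(picked))

def relation_prototype(memory):
    """Prototype embedding = mean of the memorized sample embeddings for a relation."""
    return memory.mean(axis=0)
```

A usage sketch: run `select_memory` once per relation when its task finishes, store `X[picked]` in the episodic memory, and recompute each prototype from that stored subset at every new learning stage.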
