Sentence Embedding Alignment for Lifelong Relation Extraction

Abstract

Conventional approaches to relation extraction usually require a fixed set of pre-defined relations. Such a requirement is hard to meet in many real applications, especially when new data and relations are emerging incessantly and it is computationally expensive to store all data and re-train the whole model every time new data and relations arrive. We formulate this challenging problem as lifelong relation extraction and investigate memory-efficient incremental learning methods that do not catastrophically forget knowledge learned from previous tasks. We first investigate a modified version of stochastic gradient methods with a replay memory, which surprisingly outperforms recent state-of-the-art lifelong learning methods. We then propose to further alleviate the forgetting problem by anchoring the sentence embedding space: specifically, we use an explicit alignment model to mitigate the distortion of the learned model's sentence embeddings when it is trained on new data and new relations. Experimental results on multiple benchmarks show that our proposed method significantly outperforms state-of-the-art lifelong learning approaches.
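
The abstract describes a training scheme that combines replay from a small episodic memory with an anchoring term that keeps the sentence embeddings of stored examples from drifting as new relations are learned. The sketch below is a minimal illustration of that idea, not the authors' released code: it assumes a PyTorch encoder mapping batches of tokenized sentences to sentence embeddings and a classifier over relation labels, the names train_task, align_weight, and memory_per_task are invented for illustration, and a simple MSE anchoring penalty stands in for the paper's explicit alignment model.

    import random
    import torch
    import torch.nn.functional as F

    def train_task(encoder, classifier, optimizer, task_data, memory,
                   align_weight=1.0, memory_per_task=50):
        """Train on one task, replaying stored examples and penalizing drift
        of their sentence embeddings (the anchoring idea)."""
        # Freeze reference embeddings of memory examples under the current encoder.
        with torch.no_grad():
            anchors = [(mx, my, encoder(mx)) for mx, my in memory]

        for x, y in task_data:  # each item is a (token batch, label batch) pair
            optimizer.zero_grad()
            loss = F.cross_entropy(classifier(encoder(x)), y)

            if anchors:
                # Replay one stored batch and keep its embeddings near the anchor.
                mx, my, ref_emb = random.choice(anchors)
                emb = encoder(mx)
                loss = loss + F.cross_entropy(classifier(emb), my)
                loss = loss + align_weight * F.mse_loss(emb, ref_emb)

            loss.backward()
            optimizer.step()

        # Keep a small sample of this task for replay in future tasks.
        memory.extend(random.sample(task_data, min(memory_per_task, len(task_data))))

In the paper itself the alignment is an explicit, learned transformation of the embedding space rather than this fixed MSE penalty; the sketch only illustrates how replay and embedding anchoring fit into the per-task training loop.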