EURASIP Journal on Wireless Communications and Networking

Few-shot relation classification by context attention-based prototypical networks with BERT



Abstract

Human-computer interaction under the cloud computing platform is very important, but the semantic gap limits the performance of interaction, so it is necessary to understand the semantic information in various scenarios. Relation classification (RC) is an important method for implementing the description of semantic formalization: it aims at classifying the relation between two specified entities in a sentence. Existing RC models typically rely on supervised learning or distant supervision. Supervised learning requires large-scale labeled training datasets, which are not readily available; distant supervision introduces noise, and many long-tail relations still suffer from data sparsity. Few-shot learning, which is widely used in image classification, is an effective method for overcoming data sparsity. In this paper, we apply few-shot learning to the relation classification task. However, not all instances contribute equally to the relation prototype in a text-based few-shot learning scenario, which causes the prototype deviation problem. To address this problem, we propose context attention-based prototypical networks: we design a context attention mechanism that highlights the crucial instances in the support set to generate a satisfactory prototype. We also explore the application of a recently popular pre-trained language model (BERT) to few-shot relation classification tasks. The experimental results demonstrate that our model outperforms state-of-the-art models and converges faster.
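The core idea — replacing the plain-mean prototype of standard prototypical networks with an attention-weighted mean over support instances — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, embeddings stand in for BERT sentence encodings, and query-based dot-product attention is assumed as the weighting scheme.

```python
import numpy as np

def attention_prototype(support, query):
    """Query-aware prototype from support-instance embeddings.

    support: (K, d) array, embeddings of the K support instances
             (in the paper's setting these would come from BERT)
    query:   (d,) array, the query-instance embedding

    A plain prototypical network averages the support set uniformly;
    here each instance is softmax-weighted by its similarity to the
    query, so unrepresentative instances deviate the prototype less.
    """
    scores = support @ query                 # (K,) similarity scores
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ support                 # (d,) weighted prototype

def classify(supports_by_relation, query):
    """Assign the query to the relation with the nearest prototype."""
    best, best_dist = None, np.inf
    for rel, sup in supports_by_relation.items():
        proto = attention_prototype(np.asarray(sup, dtype=float), query)
        dist = np.linalg.norm(proto - query)  # Euclidean distance
        if dist < best_dist:
            best, best_dist = rel, dist
    return best
```

In an N-way K-shot episode, `supports_by_relation` would map each of the N candidate relations to its K support embeddings, and the query is labeled with the relation whose (attention-weighted) prototype lies closest.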
