Expert Systems with Applications

Frog-GNN: Multi-perspective aggregation based graph neural network for few-shot text classification



Abstract

Few-shot text classification aims to learn a classifier from very few labeled text instances per class. Previous few-shot work in NLP is mainly based on Prototypical Networks, which encode the support-set samples of each class into a prototype representation and compute the distance between the query and each class prototype. In this prototype-aggregation process, much useful information in the support set, as well as the discrepancies between samples from different classes, is ignored. In contrast, our model focuses on all query-support pairs without such information loss. In this paper, we propose a multi-perspective aggregation based graph neural network (Frog-GNN) that observes through eyes (support and query instances) and speaks by mouth (pairs) for few-shot text classification. We construct a graph from pre-trained pair representations and aggregate information from neighborhoods via instance-level representations for message passing. After iterative interactions among instances, the final relational features of the pairs capture intra-class similarity and inter-class dissimilarity. In addition, Frog-GNN with a meta-learning strategy generalizes well to unseen classes. Experimental results on three benchmark datasets demonstrate that the proposed GNN model outperforms existing few-shot approaches in both few-shot text classification and relation classification.
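To make the contrast in the abstract concrete, the sketch below compares prototype-based scoring (collapse each class's support embeddings into a mean prototype, then rank classes by negative distance to the query) with a pair-level alternative that keeps one feature vector per query-support pair and mixes pair features by mean aggregation before scoring. This is a minimal illustrative sketch under toy assumptions, not the Frog-GNN architecture: the text encoder, the actual graph construction, and the learned message-passing weights are all replaced by fixed numpy operations, and both function names are hypothetical.

```python
import numpy as np

def prototypical_scores(support, labels, query):
    """Prototypical-Network-style scoring (illustrative).

    support: (n, d) support embeddings; labels: (n,) class ids; query: (d,).
    Each class is collapsed to the mean of its support embeddings, discarding
    per-sample information -- the loss the abstract criticizes.
    """
    classes = np.unique(labels)
    protos = np.stack([support[labels == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(protos - query, axis=1)
    return classes, -dists  # higher score = closer prototype

def pair_graph_scores(support, labels, query, rounds=2):
    """Pair-level scoring sketch: one node per query-support pair.

    Instead of collapsing classes to prototypes, every (query, support_i)
    pair keeps its own feature vector, and pair features exchange messages
    (here: simple mean mixing over a fully connected pair graph) before a
    per-class score is read out by averaging that class's pair scores.
    """
    d = support.shape[1]
    # pair feature = [support embedding | query embedding]
    pairs = np.concatenate([support, np.tile(query, (len(support), 1))], axis=1)
    for _ in range(rounds):
        # toy stand-in for learned message passing: blend each pair node
        # with the mean of all pair nodes
        pairs = 0.5 * pairs + 0.5 * pairs.mean(axis=0, keepdims=True)
    # per-pair score: negative distance between the two halves of the pair
    per_pair = -np.linalg.norm(pairs[:, :d] - pairs[:, d:], axis=1)
    classes = np.unique(labels)
    scores = np.array([per_pair[labels == c].mean() for c in classes])
    return classes, scores
```

In both functions the predicted class is `classes[np.argmax(scores)]`; the difference is only where aggregation happens (before distance computation for prototypes, after per-pair message passing for the pair graph).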
