Multimedia Tools and Applications

Multi-scale feature self-enhancement network for few-shot learning


Abstract

The goal of few-shot learning (FSL) is to learn from a handful of labeled examples and quickly adapt to a new task. Traditional FSL models use single-scale features, which lack strong representative ability. Besides, some previous methods construct graph neural networks to obtain better classifications, but they update nodes indiscriminately, which causes intra-class information to pass between inter-class nodes. In this paper, we propose a new method called Multi-scale Feature Self-enhancement Network (MFSN) for few-shot learning, which extracts multi-scale features through a novel extractor and then enhances them with selective graph neural networks that filter out incorrect message passing between nodes through a meta-learner. Finally, classification is performed by measuring distances between the augmented unlabeled features and the improved prototypes computed from the augmented labeled features. Compared to the traditional method, our method improves 1-shot accuracy by 11.8% and 5-shot accuracy by 10.3% on the MiniImagenet dataset. Experiments on the MiniImagenet, Cifar-100, and Caltech-256 datasets show the effectiveness of the proposed model.
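To make the pipeline described in the abstract more concrete, below is a minimal PyTorch sketch of two of its ingredients: pooling a convolutional feature map at several spatial scales (one plausible reading of "multi-scale feature extraction") and classifying queries by their distance to class prototypes computed from support features. The function names, toy tensors, and pooling scheme are assumptions made for illustration; this is not the authors' MFSN implementation, and in particular the selective graph neural network and meta-learner are omitted.

import torch
import torch.nn.functional as F

def multi_scale_embed(feature_map, scales=(1, 2, 4)):
    # Pool a convolutional feature map at several spatial scales and
    # concatenate the results -- one plausible reading of the abstract's
    # "multi-scale feature extraction"; the actual extractor is a novel
    # design that the abstract does not specify.
    pooled = [F.adaptive_avg_pool2d(feature_map, s).flatten(1) for s in scales]
    return torch.cat(pooled, dim=1)               # (batch, channels * sum(s*s))

def prototype_classify(support_feats, support_labels, query_feats, n_way):
    # Build one prototype per class from (enhanced) support features and
    # classify each query by negative Euclidean distance to the prototypes.
    prototypes = torch.stack(
        [support_feats[support_labels == c].mean(dim=0) for c in range(n_way)]
    )                                             # (n_way, feat_dim)
    dists = torch.cdist(query_feats, prototypes)  # (n_query, n_way)
    return (-dists).softmax(dim=1)                # per-class probabilities

if __name__ == "__main__":
    # Toy 5-way 1-shot episode; random conv features stand in for a backbone.
    n_way, k_shot, n_query, channels = 5, 1, 15, 64
    support_maps = torch.randn(n_way * k_shot, channels, 8, 8)
    query_maps = torch.randn(n_query, channels, 8, 8)
    support_labels = torch.arange(n_way).repeat_interleave(k_shot)
    probs = prototype_classify(
        multi_scale_embed(support_maps),
        support_labels,
        multi_scale_embed(query_maps),
        n_way,
    )
    print(probs.shape)                            # torch.Size([15, 5])

In this sketch the prototypes are simple class means; the paper's "improved prototypes" and feature augmentation would replace the plain mean and the raw pooled features, respectively.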
