Conference on Neural Information Processing Systems

Learning to Self-Train for Semi-Supervised Few-Shot Classification

Abstract

Few-shot classification (FSC) is challenging due to the scarcity of labeled training data (e.g. only one labeled data point per class). Meta-learning has been shown to achieve promising results by learning to initialize a classification model for FSC. In this paper we propose a novel semi-supervised meta-learning method called learning to self-train (LST) that leverages unlabeled data and specifically meta-learns how to cherry-pick and label such unsupervised data to further improve performance. To this end, we train the LST model through a large number of semi-supervised few-shot tasks. On each task, we train a few-shot model to predict pseudo labels for unlabeled data, and then iterate the self-training steps on labeled and pseudo-labeled data, with each step followed by fine-tuning. We additionally learn a soft weighting network (SWN) to optimize the self-training weights of pseudo labels so that better ones can contribute more to gradient descent optimization. We evaluate our LST method on two ImageNet benchmarks for semi-supervised few-shot classification and achieve large improvements over the state-of-the-art method. Code is at github.com/xinzheli1217/learning-to-self-train.
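The abstract above describes the inner loop of LST: predict pseudo labels for unlabeled data, cherry-pick the best of them, and fine-tune on labeled plus weighted pseudo-labeled data. Below is a minimal PyTorch sketch of one such self-training step, written only to illustrate the idea; `model`, `swn` (a stand-in for the paper's soft weighting network), and the `num_pseudo` selection rule are assumed interfaces, not the authors' code. See the linked repository for the actual implementation.

```python
import torch
import torch.nn.functional as F

def self_train_step(model, swn, optimizer, x_l, y_l, x_u, num_pseudo=25):
    """One illustrative LST-style step: pseudo-label, cherry-pick, fine-tune."""
    # 1) Predict pseudo labels for the unlabeled pool (no gradients needed here).
    model.eval()
    with torch.no_grad():
        probs = model(x_u).softmax(dim=-1)   # per-class probabilities
        conf, y_pseudo = probs.max(dim=-1)   # confidence and hard pseudo label

    # 2) Cherry-pick the most confident pseudo-labeled samples.
    idx = conf.topk(min(num_pseudo, x_u.size(0))).indices
    x_p, y_p = x_u[idx], y_pseudo[idx]

    # 3) Soft weights in [0, 1] per picked sample (hypothetical SWN interface),
    #    so better pseudo labels contribute more to the gradient.
    w = torch.sigmoid(swn(x_p)).squeeze(-1)

    # 4) Fine-tune on labeled data plus weighted pseudo-labeled data.
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_l), y_l)
    loss = loss + (w * F.cross_entropy(model(x_p), y_p, reduction="none")).mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Per the abstract, this step would be iterated within each few-shot task, and the SWN itself is meta-learned across a large number of semi-supervised tasks so that its weighting generalizes to unseen episodes.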

