Published in: Frontiers of Computer Science (《中国计算机科学前沿:英文版》)

Improving meta-learning model via meta-contrastive loss


Abstract

Recently, addressing the few-shot learning problem within the meta-learning framework has achieved great success. Regularization is a powerful technique widely used to improve machine learning algorithms, yet little research has focused on designing appropriate meta-regularizations to further improve the generalization of meta-learning models in few-shot learning. In this paper, we propose a novel meta-contrastive loss that can be regarded as a regularization to fill this gap. The motivation of our method rests on the observation that the limited data in few-shot learning is only a small sample drawn from the whole data distribution, and different sampled subsets can lead to biased representations of that distribution. Thus, the models trained on the few training data (support set) and the test data (query set) may be misaligned in model space, so a model learned on the support set cannot generalize well to the query data. The proposed meta-contrastive loss is designed to align the models of the support and query sets to overcome this problem, thereby improving the performance of the meta-learning model in few-shot learning. Extensive experiments demonstrate that our method can improve the performance of different gradient-based meta-learning models on various learning problems, e.g., few-shot regression and classification.
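The alignment idea described above can be sketched in a minimal, self-contained way. The snippet below is a toy illustration under stated assumptions, not the paper's implementation: it uses a linear regression model, one MAML-style inner gradient step per set, and a hypothetical cosine-based penalty (`meta_contrastive_penalty`) standing in for the actual meta-contrastive loss; the function names, the single inner step, and the weighting coefficient `0.1` are all assumptions made for illustration.

```python
import numpy as np

def mse_grad(w, X, y):
    # Gradient of mean-squared error for a linear model y_hat = X @ w.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def adapted_params(w, X, y, lr=0.1):
    # One MAML-style inner-loop gradient step on the given data.
    return w - lr * mse_grad(w, X, y)

def meta_contrastive_penalty(w_support, w_query):
    # Hypothetical alignment term: 1 - cosine similarity between the
    # parameters adapted on the support set and on the query set.
    # It is zero when the two adapted models point in the same direction.
    cos = w_support @ w_query / (
        np.linalg.norm(w_support) * np.linalg.norm(w_query) + 1e-12
    )
    return 1.0 - cos

# One synthetic few-shot regression task: a shared true linear map,
# with separate support and query samples drawn from it.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
X_s, X_q = rng.normal(size=(5, 3)), rng.normal(size=(5, 3))
y_s, y_q = X_s @ true_w, X_q @ true_w

w0 = rng.normal(size=3)               # meta-parameters before adaptation
w_s = adapted_params(w0, X_s, y_s)    # model adapted on the support set
w_q = adapted_params(w0, X_q, y_q)    # model adapted on the query set

# Meta-objective: the usual query loss of the support-adapted model,
# plus the alignment regularizer between the two adapted models.
query_loss = np.mean((X_q @ w_s - y_q) ** 2)
meta_loss = query_loss + 0.1 * meta_contrastive_penalty(w_s, w_q)
```

In a full gradient-based meta-learner, `meta_loss` would be differentiated with respect to `w0` across a batch of tasks; the penalty term then pushes the initialization toward regions where support-adapted and query-adapted models agree.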
