Recently, addressing few-shot learning with the meta-learning framework has achieved great success. Regularization is a powerful and widely used technique for improving machine learning algorithms, yet little research has focused on designing appropriate meta-regularizations to further improve the generalization of meta-learning models in few-shot learning. In this paper, we propose a novel meta-contrastive loss that can be regarded as a regularization to fill this gap. Our method is motivated by the observation that the limited data in few-shot learning is only a small sample from the whole data distribution, and different sampled subsets can therefore yield biased representations of that distribution. Thus, the models trained on a few training examples (the support set) and on the test examples (the query set) may be misaligned in model space, so a model learned on the support set may not generalize well to the query data. The proposed meta-contrastive loss aligns the models of the support and query sets to overcome this problem, improving the performance of meta-learning models in few-shot learning. Extensive experiments demonstrate that our method improves the performance of different gradient-based meta-learning models on various learning problems, e.g., few-shot regression and classification.
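The idea of aligning the support-set and query-set models can be sketched as follows. This is a minimal illustration, not the paper's actual method: it uses plain linear regression, a single inner gradient step, and a simple squared-distance penalty between the two adapted parameter vectors as a stand-in for the proposed meta-contrastive loss; the function names (`adapt`, `meta_alignment_loss`) and the hyperparameters are assumptions for the example.

```python
import numpy as np

def adapt(w, X, y, lr=0.01):
    """One inner-loop gradient step on the loss 0.5 * mean((X @ w - y)^2)."""
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def meta_alignment_loss(w, X_s, y_s, X_q, y_q, lam=0.1, lr=0.01):
    """Query loss after support adaptation, plus an alignment penalty
    between the support-adapted and query-adapted parameters (a simple
    L2 stand-in for the paper's meta-contrastive regularizer)."""
    w_s = adapt(w, X_s, y_s, lr)      # model implied by the support set
    w_q = adapt(w, X_q, y_q, lr)      # model implied by the query set
    task_loss = 0.5 * np.mean((X_q @ w_s - y_q) ** 2)
    align = np.sum((w_s - w_q) ** 2)  # penalize model misalignment
    return task_loss + lam * align

# Example: one sampled regression task split into support and query halves.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
y = X @ np.array([1.0, -2.0, 0.5])
w0 = np.zeros(3)
outer_loss = meta_alignment_loss(w0, X[:4], y[:4], X[4:], y[4:])
```

In the outer loop of a gradient-based meta-learner, this combined objective would be minimized with respect to the initialization `w0`, so that the initialization is driven toward a point where support and query adaptation agree.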