JMLR: Workshop and Conference Proceedings

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks

Abstract

We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies.
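The adaptation scheme described in the abstract — an inner loop of a few gradient steps per task, and an outer loop that updates the initialization so those steps generalize — can be sketched on a toy 1-D linear-regression task family. This is an illustrative first-order approximation (the meta-gradient is taken at the adapted parameters, dropping second derivatives, in the spirit of first-order MAML); all function names and hyperparameters below are made up for the sketch, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grad(theta, X, y):
    # Mean-squared error of a linear model X @ theta, plus its gradient.
    err = X @ theta - y
    return float((err ** 2).mean()), 2.0 * X.T @ err / len(y)

def sample_task():
    # Toy task family: 1-D linear regression with a random slope.
    w = rng.uniform(-2.0, 2.0)
    def data(n=10):
        X = rng.uniform(-1.0, 1.0, size=(n, 1))
        return X, w * X[:, 0]
    return data

def maml_step(theta, num_tasks=8, alpha=0.1, beta=0.05):
    # One meta-update: adapt to each task with a single inner gradient step,
    # then evaluate the adapted parameters on fresh data from the same task.
    # First-order approximation: the meta-gradient is the gradient evaluated
    # at the adapted parameters (second derivatives are dropped).
    meta_grad = np.zeros_like(theta)
    for _ in range(num_tasks):
        data = sample_task()
        X_tr, y_tr = data()
        _, g = loss_and_grad(theta, X_tr, y_tr)
        theta_adapted = theta - alpha * g              # inner-loop adaptation
        X_val, y_val = data()
        _, g_val = loss_and_grad(theta_adapted, X_val, y_val)
        meta_grad += g_val
    return theta - beta * meta_grad / num_tasks        # outer-loop meta-update

# Meta-train, then adapt to a brand-new task with one gradient step on 5 samples.
theta = np.zeros(1)
for _ in range(100):
    theta = maml_step(theta)

data = sample_task()
X, y = data(n=5)
pre_loss, g = loss_and_grad(theta, X, y)
post_loss, _ = loss_and_grad(theta - 0.1 * g, X, y)
```

The full MAML objective instead differentiates through the inner update, which requires second-order gradients; frameworks with higher-order automatic differentiation make that direct.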
