JMLR: Workshop and Conference Proceedings

Explicit Inductive Bias for Transfer Learning with Convolutional Networks


Abstract

In inductive transfer learning, fine-tuning pre-trained convolutional networks substantially outperforms training from scratch. When using fine-tuning, the underlying assumption is that the pre-trained model extracts generic features, which are at least partially relevant for solving the target task, but would be difficult to extract from the limited amount of data available on the target task. However, besides the initialization with the pre-trained model and the early stopping, there is no mechanism in fine-tuning for retaining the features learned on the source task. In this paper, we investigate several regularization schemes that explicitly promote the similarity of the final solution with the initial model. We show the benefit of having an explicit inductive bias towards the initial model, and we eventually recommend a simple $L^2$ penalty, with the pre-trained model as the reference, as the baseline penalty for transfer learning tasks.
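
The regularizer the abstract recommends can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example (not code from the paper): it penalizes the squared $L^2$ distance between the current weights and a snapshot of the pre-trained weights, rather than the usual weight decay towards zero. The function name l2_sp_penalty and the strength value are assumptions made here for illustration.

import torch

def l2_sp_penalty(model, pretrained_params, strength=0.01):
    # Sum of squared distances between the current parameters and the
    # pre-trained reference; `strength` plays the role of the usual
    # weight-decay coefficient (illustrative value, not from the paper).
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, param in model.named_parameters():
        if name in pretrained_params:
            ref = pretrained_params[name].to(param.device)
            penalty = penalty + (param - ref).pow(2).sum()
    return strength * penalty

# Usage sketch: snapshot the pre-trained weights once, before fine-tuning,
# then add the penalty to the task loss at every optimization step.
# pretrained_params = {n: p.detach().clone() for n, p in model.named_parameters()}
# loss = task_loss + l2_sp_penalty(model, pretrained_params)
# loss.backward()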
