
Risk Bounds for Transferring Representations With and Without Fine-Tuning

JMLR: Workshop and Conference Proceedings


Abstract

A popular machine learning strategy is the transfer of a representation (i.e. a feature extraction function) learned on a source task to a target task. Examples include the re-use of neural network weights or word embeddings. We develop sufficient conditions for the success of this approach. If the representation learned from the source task is fixed, we identify conditions on how the tasks relate to obtain an upper bound on target task risk via a VC dimension-based argument. We then consider using the representation from the source task to construct a prior, which is fine-tuned using target task data. We give a PAC-Bayes target task risk bound in this setting under suitable conditions. We show examples of our bounds using feedforward neural networks. Our results motivate a practical approach to weight transfer, which we validate with experiments.
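For context on the fine-tuning result, a generic PAC-Bayes bound takes the following McAllester/Maurer form (the paper's theorem adds conditions relating the source and target tasks that are not reproduced here): with probability at least 1 − δ over an i.i.d. target sample S of size m, simultaneously for all posteriors Q,

```latex
\mathbb{E}_{h \sim Q}\!\left[R(h)\right]
  \;\le\; \mathbb{E}_{h \sim Q}\!\left[\hat{R}_S(h)\right]
  \;+\; \sqrt{\frac{\operatorname{KL}(Q \,\|\, P) + \ln\!\left(2\sqrt{m}/\delta\right)}{2m}}
```

In the transfer setting above, the prior P is centered at the weights learned on the source task, so a fine-tuned posterior Q that stays close to the transferred weights incurs only a small KL(Q‖P) penalty, tightening the bound.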
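The "practical approach to weight transfer" corresponds to the two settings the abstract analyzes: reusing a fixed representation, and fine-tuning it from a source-centered prior. A minimal sketch in PyTorch follows; the architecture, layer sizes, learning rates, and the l2_to_source penalty are illustrative assumptions, not the paper's experimental setup.

```python
# Sketch of the two transfer strategies from the abstract (assumed setup).
import torch
import torch.nn as nn

# Feature extractor assumed to be trained on the source task.
features = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
)
head = nn.Linear(64, 10)  # new target-task head

# Strategy 1: fixed representation. Freeze the transferred weights and
# train only the head on target data (the VC-based bound covers this case).
for p in features.parameters():
    p.requires_grad = False
optimizer_head = torch.optim.SGD(head.parameters(), lr=1e-2)

# Strategy 2: fine-tuning. The source weights serve as the center of a
# prior; train everything on target data, penalizing distance to the
# source weights (an L2-to-init term, mirroring the KL term in the bound).
source_state = {k: v.clone() for k, v in features.state_dict().items()}
for p in features.parameters():
    p.requires_grad = True
optimizer_all = torch.optim.SGD(
    list(features.parameters()) + list(head.parameters()), lr=1e-3
)

def l2_to_source(model, ref_state, lam=1e-3):
    """Penalty keeping fine-tuned weights near the transferred ones."""
    reg = 0.0
    for name, p in model.named_parameters():
        reg = reg + (p - ref_state[name]).pow(2).sum()
    return lam * reg

# One fine-tuning step on a dummy target batch.
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(head(features(x)), y)
loss = loss + l2_to_source(features, source_state)
optimizer_all.zero_grad()
loss.backward()
optimizer_all.step()
```

The L2-to-init penalty stands in for the KL divergence to a Gaussian prior centered at the source weights; it is one common way the PAC-Bayes view of fine-tuning is operationalized in practice.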


