JMLR: Workshop and Conference Proceedings

Meta-Learning by Adjusting Priors Based on Extended PAC-Bayes Theory

Abstract

In meta-learning, an agent extracts knowledge from observed tasks, aiming to facilitate learning of novel future tasks. Under the assumption that future tasks are ‘related’ to previous tasks, accumulated knowledge should be learned in such a way that it captures the common structure across learned tasks, while allowing the learner sufficient flexibility to adapt to the novel aspects of a new task. We present a framework for meta-learning that is based on generalization error bounds, allowing us to extend various PAC-Bayes bounds to meta-learning. Learning takes place by constructing a distribution over hypotheses based on the observed tasks, and using it when learning a new task. Thus, prior knowledge is incorporated by setting an experience-dependent prior for novel tasks. We develop a gradient-based algorithm based on minimizing an objective function derived from the bounds, implement it for deep neural networks, and demonstrate its effectiveness numerically. In addition to establishing the improved performance available through meta-learning, we demonstrate the intuitive way in which prior information is manifested at different levels of the network.
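The kind of objective the abstract describes can be illustrated with a minimal sketch: a posterior distribution over hypothesis weights is penalized by its KL divergence from an experience-dependent prior, inside a generalization bound. This uses a McAllester-style PAC-Bayes bound with factorized Gaussians as an illustrative assumption; the function names and the specific bound are not taken from the paper.

```python
import math
import numpy as np

def gaussian_kl(mu_q, sigma_q, mu_p, sigma_p):
    """KL(Q || P) between factorized Gaussian distributions over weights."""
    return float(np.sum(
        np.log(sigma_p / sigma_q)
        + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2.0 * sigma_p ** 2)
        - 0.5
    ))

def pac_bayes_bound(empirical_loss, kl, n, delta=0.05):
    """Illustrative McAllester-style bound on the expected loss:
    L(Q) <= L_hat(Q) + sqrt((KL(Q||P) + ln(2*sqrt(n)/delta)) / (2n)),
    holding with probability at least 1 - delta over n samples."""
    complexity = math.sqrt((kl + math.log(2.0 * math.sqrt(n) / delta)) / (2.0 * n))
    return empirical_loss + complexity

# Toy illustration: an experience-dependent prior centered at the origin,
# and two candidate posteriors for a new task with 1000 samples.
mu_p, sigma_p = np.zeros(3), np.ones(3)
near = pac_bayes_bound(0.10, gaussian_kl(np.full(3, 0.1), np.ones(3), mu_p, sigma_p), n=1000)
far = pac_bayes_bound(0.10, gaussian_kl(np.full(3, 3.0), np.ones(3), mu_p, sigma_p), n=1000)
```

A posterior that stays close to the learned prior incurs a smaller KL penalty and hence a tighter bound (`near < far`), which is the mechanism by which shared structure across tasks tightens the guarantee for a new task; minimizing such a bound jointly over the prior and the per-task posteriors yields a gradient-based meta-learning objective of the type the abstract mentions.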
