JMLR: Workshop and Conference Proceedings

Online Parameter-Free Learning of Multiple Low Variance Tasks


Abstract

We propose a method to learn a common bias vector for a growing sequence of low-variance tasks. Unlike state-of-the-art approaches, our method does not require tuning any hyper-parameter. Our approach is presented in the non-statistical setting and comes in two variants: the “aggressive” one updates the bias after each datapoint, while the “lazy” one updates the bias only at the end of each task. We derive an across-tasks regret bound for the method. Compared to state-of-the-art approaches, the aggressive variant achieves faster rates, while the lazy one recovers standard rates, but without the need to tune hyper-parameters. We then adapt the methods to the statistical setting: the aggressive variant becomes a multi-task learning method, the lazy one a meta-learning method. Experiments confirm the effectiveness of our methods in practice.
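The aggressive/lazy distinction in the abstract can be illustrated with a toy simulation. The sketch below is an assumption-laden caricature, not the paper's algorithm: the paper's method is parameter-free, whereas here the step sizes `eta`, `gamma_aggr`, and `gamma_lazy`, the synthetic low-variance task family, and the squared loss are all choices made only to keep the example short. It compares starting within-task online gradient descent from a learned bias (updated per datapoint or per task) against independent learning from zero.

```python
import numpy as np

# Hypothetical setup: each task vector is a small perturbation of a common
# mean m_true, i.e. a "low variance" family of tasks.
rng = np.random.default_rng(0)
d, n_tasks, n_points = 5, 20, 30
m_true = np.ones(d)
task_ws = m_true + 0.1 * rng.standard_normal((n_tasks, d))

# Pre-generate the data streams so every variant sees identical inputs.
X = rng.standard_normal((n_tasks, n_points, d))
Y = np.einsum("tpd,td->tp", X, task_ws)

def run(meta_mode, eta=0.05, gamma_aggr=0.02, gamma_lazy=0.5):
    """Within-task online gradient descent started at the current bias.
    'aggressive': the bias takes a small step after every datapoint;
    'lazy':       the bias takes one step at the end of each task;
    'none':       the bias stays at zero (independent-learning baseline)."""
    bias = np.zeros(d)
    total = 0.0
    for t in range(n_tasks):
        w = bias.copy()                          # warm-start from the bias
        for p in range(n_points):
            err = X[t, p] @ w - Y[t, p]          # squared-loss residual
            total += 0.5 * err ** 2
            w -= eta * err * X[t, p]             # within-task OGD step
            if meta_mode == "aggressive":
                bias += gamma_aggr * (w - bias)  # per-datapoint bias update
        if meta_mode == "lazy":
            bias += gamma_lazy * (w - bias)      # per-task bias update
    return total / (n_tasks * n_points), bias

loss_none, _ = run("none")
loss_aggr, bias_aggr = run("aggressive")
loss_lazy, bias_lazy = run("lazy")
print(f"avg loss  none: {loss_none:.3f}  "
      f"aggressive: {loss_aggr:.3f}  lazy: {loss_lazy:.3f}")
print("lazy bias distance to m_true:", np.linalg.norm(bias_lazy - m_true))
```

Because the tasks cluster tightly around `m_true`, both meta-updated variants accumulate less loss than the zero-bias baseline here; the fixed step sizes are precisely what the paper's parameter-free construction removes.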
