
Multi-task Gaussian Process Prediction

Abstract

In this paper we investigate multi-task learning in the context of Gaussian Processes (GP). We propose a model that learns a shared covariance function on input-dependent features and a "free-form" covariance matrix over tasks. This allows for good flexibility when modelling inter-task dependencies while avoiding the need for large amounts of data for training. We show that under the assumption of noise-free observations and a block design, predictions for a given task only depend on its target values and therefore a cancellation of inter-task transfer occurs. We evaluate the benefits of our model on two practical applications: a compiler performance prediction problem and an exam score prediction task. Additionally, we make use of GP approximations and properties of our model in order to provide scalability to large data sets.
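The model described above factorizes the covariance between observations as the product of a "free-form" task covariance matrix K^f and a shared input covariance function k^x, i.e. cov(f_i(x), f_j(x')) = K^f[i, j] * k^x(x, x'). The following is a minimal NumPy sketch of that kernel construction; the function names (rbf_kernel, multitask_kernel), the choice of a squared-exponential input kernel, and the toy data are illustrative assumptions, not code from the paper.

    import numpy as np

    def rbf_kernel(X1, X2, lengthscale=1.0):
        # Squared-exponential input covariance k^x(x, x') between two sets of inputs.
        d2 = (np.sum(X1**2, axis=1)[:, None]
              + np.sum(X2**2, axis=1)[None, :]
              - 2.0 * X1 @ X2.T)
        return np.exp(-0.5 * d2 / lengthscale**2)

    def multitask_kernel(X1, tasks1, X2, tasks2, Kf, lengthscale=1.0):
        # Multi-task covariance: cov(f_i(x), f_j(x')) = Kf[i, j] * k^x(x, x'),
        # where tasks1/tasks2 give the task index of each input row.
        return Kf[np.ix_(tasks1, tasks2)] * rbf_kernel(X1, X2, lengthscale)

    # Toy usage with two tasks; parameterizing Kf via a Cholesky-style factor L
    # keeps the task covariance positive semi-definite while remaining "free-form".
    rng = np.random.default_rng(0)
    X = rng.normal(size=(6, 1))
    tasks = np.array([0, 0, 0, 1, 1, 1])
    L = np.array([[1.0, 0.0],
                  [0.8, 0.6]])
    Kf = L @ L.T

    K = multitask_kernel(X, tasks, X, tasks, Kf) + 1e-6 * np.eye(len(X))  # jitter
    print(K.shape)  # (6, 6) joint covariance over all (input, task) observations

In practice the factor L and the input-kernel hyperparameters would be learned jointly by maximizing the GP marginal likelihood; the block structure of K (inputs grouped by task) is also what the paper's block-design analysis and scalable approximations exploit.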
