Journal: IEEE Transactions on Neural Networks and Learning Systems

Generalization Bounds of Multitask Learning From Perspective of Vector-Valued Function Learning

Abstract

In this article, we study the generalization performance of multitask learning (MTL) by treating MTL as a learning process of vector-valued functions (VFs). Given a small training sample, we answer two theoretical questions: 1) under what conditions does MTL perform better than single-task learning (STL)? and 2) under what conditions does MTL guarantee the consistency of all tasks during learning? In contrast to the conventional task-summation-based formulation of MTL, the introduction of the VF form enables us to characterize the behavior of each task and the task-group relatedness in MTL. Specifically, the task-group relatedness examines how the success (or failure) of some tasks affects the performance of the other tasks. By deriving specific deviation and symmetrization inequalities for VFs, we obtain a generalization bound for MTL that upper bounds the joint probability that at least one task has a large generalization gap. To answer the first question, we discuss how the synergic relatedness between task groups affects the generalization performance of MTL and show that MTL outperforms STL if almost every pair of complementary task groups is predominantly synergic. To answer the second question, we present a sufficient condition that guarantees the consistency of each task in MTL, requiring that the function class of each task not have high complexity. In addition, our findings provide a strategy for examining whether a given task setting will enjoy the advantages of MTL.
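A minimal sketch of the quantity such a bound controls, in illustrative notation that is not taken from the paper (T tasks, learned predictor \hat{f}_t, expected risk R_t, empirical risk \hat{R}_t over n samples per task, and a complexity term C for the vector-valued function class \mathcal{F}):

\[
\Pr\Big[\, \exists\, t \in \{1,\dots,T\} :\; R_t(\hat{f}_t) - \hat{R}_t(\hat{f}_t) > \epsilon \,\Big]
\;\le\; C\big(\mathcal{F}, T, n, \epsilon\big).
\]

On this reading, MTL is advantageous when the joint bound on the right decays faster than the union of the corresponding single-task bounds, which is the MTL-versus-STL comparison discussed in the abstract.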