
Attentive Multi-task Deep Reinforcement Learning

Abstract

Sharing knowledge between tasks is vital for efficient learning in a multi-task setting. However, most research so far has focused on the easier case where knowledge transfer is not harmful, i.e., where knowledge from one task cannot negatively impact the performance on another task. In contrast, we present an approach to multi-task deep reinforcement learning based on attention that does not require any a priori assumptions about the relationships between tasks. Our attention network automatically groups task knowledge into sub-networks at a state-level granularity. It thereby achieves positive knowledge transfer where possible, and avoids negative transfer in cases where tasks interfere. We test our algorithm against two state-of-the-art multi-task/transfer learning approaches and show comparable or superior performance while requiring fewer network parameters.
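
The abstract describes an attention network that soft-assigns each state to shared sub-networks, so tasks share parameters where transfer helps and use different sub-networks where they interfere. The following is a minimal sketch of that idea in PyTorch; the class name, layer sizes, and the soft-weighted mixture of sub-network outputs are illustrative assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn

class AttentiveMultiTaskNet(nn.Module):
    """Illustrative sketch: an attention module produces per-state weights
    over a set of shared sub-networks, and their outputs are mixed
    accordingly. Hyperparameters are placeholders."""

    def __init__(self, state_dim, action_dim, num_subnets=4, hidden=64):
        super().__init__()
        # Shared sub-networks, each a small head mapping states to outputs
        # (e.g., Q-values or policy logits).
        self.subnets = nn.ModuleList([
            nn.Sequential(
                nn.Linear(state_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, action_dim),
            )
            for _ in range(num_subnets)
        ])
        # Attention network: per-state weights over the sub-networks.
        self.attention = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_subnets),
        )

    def forward(self, state):
        # state: (batch, state_dim)
        weights = torch.softmax(self.attention(state), dim=-1)              # (batch, K)
        outputs = torch.stack([net(state) for net in self.subnets], dim=1)  # (batch, K, action_dim)
        # State-level weighted combination of sub-network outputs.
        return (weights.unsqueeze(-1) * outputs).sum(dim=1)                 # (batch, action_dim)

if __name__ == "__main__":
    net = AttentiveMultiTaskNet(state_dim=8, action_dim=4)
    q_values = net(torch.randn(32, 8))
    print(q_values.shape)  # torch.Size([32, 4])

Because the attention weights are computed per state, two tasks can share a sub-network in states where their dynamics align and diverge onto separate sub-networks elsewhere, which is how the paper frames avoiding negative transfer.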
