IEEE Transactions on Cognitive and Developmental Systems

Multitask Learning for Object Localization With Deep Reinforcement Learning



Abstract

In object localization, methods based on a top-down search strategy that focus on learning a policy have been widely researched. The performance of these methods relies heavily on the learned policy. This paper proposes a deep Q-network (DQN) that employs multitask learning to localize class-specific objects. The DQN agent consists of two parts: an action executor and a terminal. The action executor determines the action the agent should perform, and the terminal decides whether the agent has detected the target object. Taking advantage of the feature-learning capability of multitask methods, our approach combines the two parts by sharing hidden layers and trains the agent with multitask learning. The detection dataset from the PASCAL Visual Object Classes (VOC) Challenge 2007 was used to evaluate the proposed method, and the results show that it achieves higher average precision with fewer search steps than comparable methods.
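The abstract describes a DQN whose action-executor and terminal branches share hidden layers and are trained jointly. The paper's exact architecture is not given here, so the following is a minimal PyTorch sketch of that idea only; the layer sizes, state dimension, number of search actions, and loss weighting are illustrative assumptions, not the authors' specification.

```python
import torch
import torch.nn as nn


class MultitaskDQN(nn.Module):
    """Sketch of a DQN with shared hidden layers and two heads:
    an action executor (Q-values over box-transformation actions)
    and a terminal branch (has the target object been localized?)."""

    def __init__(self, state_dim=4096, num_actions=8, hidden_dim=1024):
        super().__init__()
        # Shared hidden layers: both tasks learn from the same features.
        self.shared = nn.Sequential(
            nn.Linear(state_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Action-executor head: one Q-value per search action.
        self.action_head = nn.Linear(hidden_dim, num_actions)
        # Terminal head: 2-way score (keep searching vs. object detected).
        self.terminal_head = nn.Linear(hidden_dim, 2)

    def forward(self, state):
        h = self.shared(state)
        return self.action_head(h), self.terminal_head(h)


def multitask_loss(q_pred, q_target, term_logits, term_labels, alpha=0.5):
    """Joint objective: TD error for the action head plus a classification
    loss for the terminal head (the weighting alpha is an assumption)."""
    td_loss = nn.functional.smooth_l1_loss(q_pred, q_target)
    term_loss = nn.functional.cross_entropy(term_logits, term_labels)
    return td_loss + alpha * term_loss
```

In a training step of this sketch, q_pred would be the Q-value of the chosen action and term_labels a binary signal for whether the current box sufficiently overlaps the ground truth; those details are again typical choices for top-down localization agents, not taken from the paper.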
