Pattern Recognition: The Journal of the Pattern Recognition Society

Three-step action search networks with deep Q-learning for real-time object tracking


Abstract

Sliding window and candidate sampling are two widely used search strategies for visual object tracking, but both fall far short of real-time performance. By treating the tracking problem as a three-step decision-making process, a novel tracking network, which explores only three small subsets of candidate regions, is developed to achieve faster (real-time) localization of the target object across the frames of a video. A convolutional neural network agent is formulated to interact with a video over time, and two action-value functions are exploited to learn a favorable policy off-line that determines the best action for visual object tracking. Our model is trained in a collaborative learning manner, combining action classification with cumulative reward approximation from reinforcement learning. We have evaluated our proposed tracker against a number of state-of-the-art ones over three popular tracking benchmarks: OTB-2013, OTB-2015, and VOT2017. The experimental results demonstrate that our proposed method achieves very competitive performance on real-time object tracking. (C) 2020 Elsevier Ltd. All rights reserved.
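To illustrate the general idea of action-based search (as opposed to exhaustive sliding-window scanning), the following is a minimal sketch of a three-step, Q-value-driven bounding-box adjustment loop. The action set, step sizes, and `q_function` interface here are illustrative assumptions, not the paper's actual network or action definitions; in the real method the Q-values would come from a trained convolutional agent.

```python
import random

# Hypothetical discrete action set for adjusting a bounding box (x, y, w, h).
# Names and step fractions are illustrative, not taken from the paper.
ACTIONS = {
    "left":    (-0.1, 0.0, 0.0, 0.0),
    "right":   (0.1, 0.0, 0.0, 0.0),
    "up":      (0.0, -0.1, 0.0, 0.0),
    "down":    (0.0, 0.1, 0.0, 0.0),
    "bigger":  (0.0, 0.0, 0.1, 0.1),
    "smaller": (0.0, 0.0, -0.1, -0.1),
    "stop":    (0.0, 0.0, 0.0, 0.0),
}


def apply_action(box, name):
    """Shift/scale the box by a fraction of its own width and height."""
    dx, dy, dw, dh = ACTIONS[name]
    x, y, w, h = box
    return (x + dx * w, y + dy * h, w * (1 + dw), h * (1 + dh))


def three_step_search(box, q_function, epsilon=0.0):
    """Epsilon-greedy three-step action search: at each of three decision
    steps, pick the action with the highest Q-value (q_function(box, action))
    and adjust the box; a 'stop' action ends the search early.
    Only three small candidate sets are evaluated, instead of scanning
    the whole frame, which is what makes this kind of search fast."""
    for _ in range(3):
        if random.random() < epsilon:
            name = random.choice(list(ACTIONS))
        else:
            name = max(ACTIONS, key=lambda a: q_function(box, a))
        if name == "stop":
            break
        box = apply_action(box, name)
    return box
```

With a toy Q-function that always prefers moving right, the box translates by one tenth of its width at each of the three steps, e.g. `three_step_search((0.0, 0.0, 10.0, 10.0), lambda b, a: 1.0 if a == "right" else 0.0)` yields `(3.0, 0.0, 10.0, 10.0)`.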
