IEEE Transactions on Image Processing

DP-Siam: Dynamic Policy Siamese Network for Robust Object Tracking



Abstract

Balancing the trade-off between real-time performance and accuracy is a major challenge in object tracking. In this paper, a novel dynamic policy gradient Agent-Environment architecture with a Siamese network (DP-Siam) is proposed to train the tracker to increase accuracy and expected average overlap while running in real time. DP-Siam is trained offline with reinforcement learning to produce a continuous action that predicts the optimal object location. DP-Siam has a novel architecture consisting of three networks: an Agent network that predicts the optimal state (bounding box) of the tracked object, an Environment network that estimates the Q-value during the offline training phase to minimize the loss, and a Siamese network that produces a heat-map. During online tracking, the Environment network acts as a verifier of the Agent network's action. Extensive experiments are performed on six widely used benchmarks: OTB2013, OTB50, OTB100, VOT2015, VOT2016, and VOT2018. The results show that DP-Siam significantly outperforms current state-of-the-art trackers.
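The three-network layout described in the abstract resembles an actor-critic arrangement: a Siamese branch produces a heat-map, an Agent (actor) maps it to a continuous bounding-box action, and an Environment (critic) scores the state-action pair with a Q-value. The sketch below is a minimal, hypothetical illustration of that data flow in PyTorch; all layer sizes, module names, and the action parameterization (four normalized box offsets) are assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch of a DP-Siam-style three-network layout.
# Layer widths, input sizes, and the 4-D action format are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SiameseBranch(nn.Module):
    """Shared embedding; cross-correlating template and search features gives a heat-map."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
        )

    def forward(self, template, search):
        t = self.embed(template)            # (1, 32, 15, 15) for a 64x64 template
        s = self.embed(search)              # (1, 32, 31, 31) for a 128x128 search region
        # Template features act as a correlation kernel over the search features.
        return F.conv2d(s, t)               # heat-map: (1, 1, 17, 17)


class AgentNet(nn.Module):
    """Actor: maps the flattened heat-map to a continuous action (4 box offsets in [-1, 1])."""
    def __init__(self, heatmap_dim=17 * 17):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(heatmap_dim, 64), nn.ReLU(), nn.Linear(64, 4))

    def forward(self, heatmap):
        return torch.tanh(self.fc(heatmap.flatten(1)))


class EnvironmentNet(nn.Module):
    """Critic: scores a (heat-map, action) pair with a scalar Q-value."""
    def __init__(self, heatmap_dim=17 * 17):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(heatmap_dim + 4, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, heatmap, action):
        return self.fc(torch.cat([heatmap.flatten(1), action], dim=1))


siamese, agent, env = SiameseBranch(), AgentNet(), EnvironmentNet()
template = torch.randn(1, 3, 64, 64)        # appearance template of the tracked object
search = torch.randn(1, 3, 128, 128)        # current-frame search region
heatmap = siamese(template, search)
action = agent(heatmap)                     # predicted bounding-box adjustment
q_value = env(heatmap, action)              # offline: trained toward a target Q; online: verifier score
```

During offline training the critic's Q-value would drive the loss; during online tracking it plays the verifier role the abstract describes, accepting or rejecting the Agent's proposed box.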
