Information Sciences: An International Journal

Learning reinforced attentional representation for end-to-end visual tracking


Abstract

Although tracking approaches have made tremendous advances over the last decade, achieving high-performance visual tracking remains a challenge. In this paper, we propose an end-to-end network model that learns a reinforced attentional representation for accurate target-object discrimination and localization. We introduce a novel hierarchical attentional module, built from long short-term memory units and multi-layer perceptrons, that leverages both inter-frame and intra-frame attention to emphasize informative visual patterns. Moreover, we incorporate a contextual attentional correlation filter into the backbone network so that the whole model is trainable end to end. Our approach not only takes full advantage of informative geometric and semantic cues but also updates the correlation filters online, without fine-tuning the backbone network, to adapt to variations in the target object's appearance. Extensive experiments on several popular benchmark datasets demonstrate that the proposed approach is both effective and computationally efficient. (C) 2019 Elsevier Inc. All rights reserved.
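The online correlation-filter update mentioned in the abstract can be sketched in one dimension with a classic MOSSE-style formulation — a deliberate simplification standing in for the paper's contextual attentional correlation filter, which is instead learned end to end. The class and parameter names below are illustrative, not the paper's:

```python
import cmath

def dft(xs):
    """Naive O(n^2) discrete Fourier transform (stdlib only, small signals)."""
    n = len(xs)
    return [sum(xs[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(Xs):
    """Inverse DFT, returning complex samples scaled by 1/n."""
    n = len(Xs)
    return [sum(Xs[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

class CorrelationFilter:
    """MOSSE-style filter H* = A / B, updated online by linear interpolation.

    A accumulates G * conj(F) (desired response times conjugate spectrum),
    B accumulates F * conj(F) + lam (regularized energy spectrum).
    """
    def __init__(self, lam=1e-4, lr=0.125):
        self.lam, self.lr = lam, lr
        self.A, self.B = None, None

    def update(self, frame, target):
        """Incorporate one frame and its desired (peaked) response."""
        F, G = dft(frame), dft(target)
        A_new = [g * f.conjugate() for f, g in zip(F, G)]
        B_new = [f * f.conjugate() + self.lam for f in F]
        if self.A is None:
            self.A, self.B = A_new, B_new
        else:
            # Running interpolation: adapts to appearance changes online,
            # with no gradient step on any backbone network.
            self.A = [(1 - self.lr) * a + self.lr * an
                      for a, an in zip(self.A, A_new)]
            self.B = [(1 - self.lr) * b + self.lr * bn
                      for b, bn in zip(self.B, B_new)]

    def respond(self, frame):
        """Correlation response map; its argmax estimates the target position."""
        F = dft(frame)
        H = [a / b for a, b in zip(self.A, self.B)]
        return [r.real for r in idft([f * h for f, h in zip(F, H)])]
```

The interpolation rate `lr` plays the role of the online update the abstract describes: the filter tracks appearance changes frame by frame, while the (here absent) feature extractor would stay fixed at test time.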
