IEEE Winter Conference on Applications of Computer Vision

Model-Free Tracking With Deep Appearance and Motion Features Integration



Abstract

Because it can track an arbitrary object, a model-free tracker is broadly applicable regardless of the target type. However, designing such a generalized framework is challenging due to the lack of object-specific prior information. As one solution, this work presents a real-time model-free object tracking approach based on Convolutional Neural Networks (CNNs). To overcome the scarcity of object-centric information, both appearance and motion features are deeply integrated by the proposed AMNet, an end-to-end offline-trained two-stream network. Of the two parallel streams, the ANet extracts appearance features with a multi-scale Siamese atrous CNN, enabling a tracking-by-matching strategy. The MNet performs deep motion detection, localizing unknown moving objects by processing generic motion features. The final tracking result at each frame is generated by fusing the output response maps from both sub-networks. The proposed AMNet reports leading performance on both the OTB and VOT benchmark datasets at favorable real-time processing speed.
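The abstract states that each frame's tracking result is obtained by fusing the response maps of the two sub-networks. A minimal sketch of such a fusion step is shown below; the weighted-sum rule and the weight `alpha` are assumptions for illustration, since the abstract does not specify the actual fusion mechanism used by AMNet.

```python
import numpy as np

def fuse_response_maps(appearance_map, motion_map, alpha=0.5):
    """Fuse two per-frame response maps and locate the target peak.

    alpha is a hypothetical weight favoring the appearance stream;
    the paper's actual fusion rule is not given in the abstract.
    """
    fused = alpha * appearance_map + (1.0 - alpha) * motion_map
    # The predicted target location is the position of the maximum response.
    peak = tuple(int(i) for i in np.unravel_index(np.argmax(fused), fused.shape))
    return fused, peak

# Toy example: 5x5 response maps whose peaks disagree by one pixel.
app = np.zeros((5, 5)); app[2, 2] = 1.0   # appearance stream's peak
mot = np.zeros((5, 5)); mot[2, 3] = 1.0   # motion stream's peak
fused, peak = fuse_response_maps(app, mot, alpha=0.6)
print(peak)  # (2, 2): the appearance stream carries more weight here
```

With `alpha=0.6` the appearance peak dominates (0.6 vs. 0.4), so the fused map selects the appearance stream's location; lowering `alpha` below 0.5 would flip the decision toward the motion stream.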
