IEEE Winter Conference on Applications of Computer Vision

Model-Free Tracking With Deep Appearance and Motion Features Integration



Abstract

Because it can track an anonymous (previously unseen) object, a model-free tracker is broadly applicable regardless of the target type. However, designing such a generalized framework is challenging due to the lack of object-specific prior information. As one solution, this work presents a real-time model-free object tracking approach built on Convolutional Neural Networks (CNNs). To overcome the scarcity of object-centric information, the proposed AMNet, an end-to-end, offline-trained two-stream network, deeply integrates both appearance and motion features. Of the two parallel streams, ANet extracts appearance features with a multi-scale Siamese atrous CNN, enabling a tracking-by-matching strategy, while MNet performs deep motion detection, localizing unseen moving objects by processing generic motion features. The final tracking result at each frame is generated by fusing the response maps output by the two sub-networks. The proposed AMNet achieves leading performance on both the OTB and VOT benchmark datasets at a favorable real-time processing speed.
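The abstract outlines a two-stream design: an appearance stream (ANet) that matches a target exemplar against the search region with a multi-scale Siamese atrous CNN, a motion stream (MNet) that localizes moving objects from generic motion cues, and a fusion of the two response maps into the final tracking output. The sketch below illustrates that structure in PyTorch; it is not the authors' implementation, and the layer widths, dilation rates, motion input, and learnable fusion weight are all illustrative assumptions made only for this example.

# Minimal sketch of a two-stream appearance + motion tracker, assuming PyTorch.
# NOT the authors' released code: layer widths, dilation rates, the frame-difference
# motion input, and the scalar fusion weight below are illustrative assumptions
# based only on the abstract (Siamese atrous appearance stream, motion stream,
# response-map fusion).

import torch
import torch.nn as nn
import torch.nn.functional as F


class ANet(nn.Module):
    """Appearance stream: multi-scale Siamese atrous CNN for tracking-by-matching."""

    def __init__(self, channels=64):
        super().__init__()
        # Parallel atrous (dilated) convolutions approximate the multi-scale idea.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(3, channels, kernel_size=3, padding=d, dilation=d),
                nn.ReLU(inplace=True),
            )
            for d in (1, 2, 4)
        ])

    def embed(self, x):
        # Concatenate features extracted at every dilation rate.
        return torch.cat([b(x) for b in self.branches], dim=1)

    def forward(self, exemplar, search):
        # Siamese matching: cross-correlate exemplar features with search features.
        z = self.embed(exemplar)   # (B, C, Hz, Wz) target template features
        x = self.embed(search)     # (B, C, Hx, Wx) search-region features
        responses = []
        for zi, xi in zip(z, x):   # per-sample depthwise cross-correlation
            r = F.conv2d(xi.unsqueeze(0), zi.unsqueeze(1), groups=zi.size(0))
            responses.append(r.sum(dim=1, keepdim=True))
        return torch.cat(responses, dim=0)  # (B, 1, H', W') appearance response map


class MNet(nn.Module):
    """Motion stream: localizes moving objects from a generic motion input."""

    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, motion_input):
        return self.net(motion_input)       # (B, 1, H, W) motion response map


class AMNet(nn.Module):
    """Fuses the appearance and motion response maps into one tracking response."""

    def __init__(self):
        super().__init__()
        self.anet = ANet()
        self.mnet = MNet()
        # Learnable fusion weight; the paper's actual fusion scheme may differ.
        self.alpha = nn.Parameter(torch.tensor(0.5))

    def forward(self, exemplar, search, motion_input):
        appearance = self.anet(exemplar, search)
        motion = self.mnet(motion_input)
        # Resize the motion map to the appearance map's resolution before fusing.
        motion = F.interpolate(motion, size=appearance.shape[-2:],
                               mode='bilinear', align_corners=False)
        return self.alpha * appearance + (1.0 - self.alpha) * motion


if __name__ == "__main__":
    model = AMNet()
    exemplar = torch.randn(1, 3, 127, 127)   # target template crop
    search = torch.randn(1, 3, 255, 255)     # current-frame search region
    motion = torch.randn(1, 3, 255, 255)     # e.g. a stacked frame difference
    response = model(exemplar, search, motion)
    print(response.shape)                    # fused response map; its peak locates the target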
