European Conference on Computer Vision

Exploiting Contextual Motion Cues for Visual Object Tracking

Abstract

In this paper, we propose an algorithm for on-line, real-time tracking of arbitrary objects in videos from unconstrained environments. The method is based on a particle filter framework using different visual features and motion prediction models. We effectively integrate a discriminative on-line learning classifier into the model and propose a new method to collect negative training examples for updating the classifier at each video frame. Instead of taking negative examples only from the surroundings of the object region, or from specific distracting objects, our algorithm samples the negatives from a contextual motion density function. We experimentally show that this type of learning improves the overall performance of the tracking algorithm. Finally, we present quantitative and qualitative results on four challenging public datasets that show the robustness of the tracking algorithm with respect to appearance and view changes, lighting variations, partial occlusions as well as object deformations.
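The key idea of drawing negative training examples from a contextual motion density, rather than only from the object's immediate surroundings, can be sketched as follows. This is a hypothetical illustration, assuming the density is available as per-cell motion magnitudes on a grid; the paper's actual density estimation and sampling scheme are not reproduced here:

```python
import random

def sample_negatives(motion_density, object_box, n_samples, rng=None):
    """Sample negative-example locations weighted by contextual motion.

    motion_density: dict mapping (x, y) grid cells to a motion magnitude
                    (a stand-in for the paper's motion density function).
    object_box:     (x0, y0, x1, y1) region to exclude (the tracked object).
    Returns a list of n_samples cell coordinates drawn from the context.
    """
    rng = rng or random.Random(0)
    x0, y0, x1, y1 = object_box
    # Keep only cells outside the object region: negatives must come
    # from the context, never from the object itself.
    cells = [(p, w) for p, w in motion_density.items()
             if not (x0 <= p[0] <= x1 and y0 <= p[1] <= y1)]
    points = [p for p, _ in cells]
    weights = [w for _, w in cells]
    # Weighted sampling: high-motion context regions (likely distractors)
    # are picked more often than static background.
    return rng.choices(points, weights=weights, k=n_samples)
```

Compared with sampling negatives uniformly around the object, this biases the classifier update toward moving context regions, which are the ones most likely to cause drift.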