
Robust Visual Object Tracking with Interleaved Segmentation


Abstract

In this paper we present a new approach for tracking non-rigid, deformable objects that merges an on-line boosting-based tracker with a fast foreground-background segmentation. We extend an on-line boosting-based tracker that uses axis-aligned bounding boxes with a fixed aspect ratio as tracking states. By constructing a confidence map from the on-line boosting-based tracker and unifying it with a confidence map obtained from a foreground-background segmentation algorithm, we build a superior confidence map. To construct a rough confidence map for a new frame from the on-line boosting tracker, we employ the response of the strong classifier as well as the responses of the individual weak classifiers built during the preceding update step. This rough confidence map provides a coarse estimate of the object's position and size. To refine it, we build a fine, pixel-wise segmented confidence map and merge the two maps. Our segmentation method is color-histogram-based and provides a fine yet fast image segmentation: by means of back-projection and Bayes' rule we obtain a confidence value for every pixel. The rough and fine confidence maps are merged by an adaptively weighted sum, with weights derived from the variances of the two maps. We further apply morphological operators to the merged confidence map to reduce noise. In the resulting map we estimate the object's location and size via continuously adaptive mean shift (CAMShift). Our approach provides a rotated rectangle as tracking state, which describes non-rigid, deformable objects more precisely than axis-aligned bounding boxes. We evaluate our tracker on the visual object tracking (VOT) 2016 benchmark dataset.
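The following is a minimal Python/OpenCV sketch of the segmentation-and-fusion stage described in the abstract, not the authors' implementation: it derives a per-pixel foreground confidence from hue-histogram back-projection and Bayes' rule (with equal foreground/background priors, an assumption), fuses it with a rough tracker confidence map via a variance-based weighting (the exact weighting rule is an assumption), suppresses noise with a morphological opening, and localizes the object with CAMShift, which yields a rotated rectangle. All function names and parameters are illustrative.

```python
import cv2
import numpy as np


def color_histogram(hsv, mask, bins=32):
    """Hue histogram of the masked region, normalized to [0, 255]."""
    hist = cv2.calcHist([hsv], [0], mask, [bins], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist


def fine_confidence_map(hsv, fg_hist, bg_hist):
    """Per-pixel foreground probability via back-projection and Bayes' rule,
    assuming equal foreground and background priors."""
    like_fg = cv2.calcBackProject([hsv], [0], fg_hist, [0, 180], 1).astype(np.float32)
    like_bg = cv2.calcBackProject([hsv], [0], bg_hist, [0, 180], 1).astype(np.float32)
    return like_fg / (like_fg + like_bg + 1e-6)


def fuse_maps(rough, fine):
    """Adaptively weighted sum of the rough (boosting) and fine (segmentation)
    confidence maps, both assumed normalized to [0, 1]. Here each map is
    weighted by its own variance, so the more discriminative map contributes
    more; the paper's exact variance-based rule may differ."""
    w_rough, w_fine = np.var(rough), np.var(fine)
    return (w_rough * rough + w_fine * fine) / (w_rough + w_fine + 1e-6)


def localize(conf, prev_box):
    """Suppress noise with a morphological opening, then run CAMShift on the
    fused confidence map. Returns a rotated rectangle ((cx, cy), (w, h), angle)
    and the updated axis-aligned search window."""
    conf8 = cv2.convertScaleAbs(conf, alpha=255.0)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    conf8 = cv2.morphologyEx(conf8, cv2.MORPH_OPEN, kernel)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    rotated_rect, new_box = cv2.CamShift(conf8, prev_box, criteria)
    return rotated_rect, new_box
```

A full tracker would additionally build the rough confidence map from the strong and weak classifier responses of the on-line boosting tracker and update both the classifier and the color histograms after every frame; that on-line update loop is omitted from this sketch.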
