2012 Sixth International Conference on Distributed Smart Cameras

Maximum-likelihood object tracking from multi-view video by combining homography and epipolar constraints



Abstract

This paper addresses the problem of object tracking in occlusion scenarios, where multiple uncalibrated cameras with overlapping fields of view are used. We propose a novel method in which tracking is first done independently in each view, and the tracking results are then mapped between each pair of views to improve the tracking in the individual views, under the assumptions that objects are not occluded in all views and move upright on a planar ground, which induces a homography relation between each pair of views. The tracking results are mapped by jointly exploiting the geometric constraints of the homography, the epipolar geometry and the vertical vanishing point. The main contributions of this paper are: (a) formulating a reference model of multi-view object appearance using region covariance for each view; (b) defining a likelihood measure based on geodesics on a Riemannian manifold that is consistent with the destination view, by mapping both the estimated positions and the appearances of the tracked object from the other views; (c) locating the object in each individual view using a maximum-likelihood criterion applied to the multi-view estimates of the object position. Experiments have been conducted on videos from multiple uncalibrated cameras in which targets undergo long-term partial or full occlusions. Comparisons with two existing methods and performance evaluations are also presented. The results show the effectiveness of the proposed method in terms of robustness against tracking drift caused by occlusions.
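The core geometric step is transferring a tracked object's ground-plane position from one view to another through the planar-ground homography, before the epipolar and vertical-vanishing-point constraints are applied. Below is a minimal sketch of that position transfer, assuming a pre-estimated 3x3 homography `H_ab` between two views and a hypothetical foot-point coordinate; it illustrates the standard homography mapping, not the authors' implementation.

```python
import numpy as np

def map_ground_point(H, point_xy):
    """Transfer a ground-plane image point from view A to view B via homography H."""
    p = np.array([point_xy[0], point_xy[1], 1.0])   # homogeneous coordinates
    q = H @ p
    return q[:2] / q[2]                              # back to pixel coordinates

# Hypothetical homography and foot position, for illustration only.
H_ab = np.array([[1.02, 0.03, -15.0],
                 [0.01, 0.98,   7.5],
                 [1e-5, 2e-5,   1.0]])
foot_in_view_a = (320.0, 410.0)
print(map_ground_point(H_ab, foot_in_view_a))
```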
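Contribution (b) compares region-covariance appearance descriptors, which are symmetric positive-definite matrices and are therefore compared with a geodesic distance on the corresponding Riemannian manifold. The sketch below shows one common construction: a covariance descriptor over a per-pixel feature set (coordinates, colour channels, gradient magnitudes; the paper's exact feature set may differ) and the affine-invariant geodesic distance, turned into a likelihood with a hypothetical Gaussian bandwidth `sigma`.

```python
import numpy as np
from scipy.linalg import eigvalsh

def region_covariance(patch):
    """Region covariance descriptor of an image patch (H x W x 3, float).

    Per-pixel features: x, y, R, G, B, |Ix|, |Iy| (a common choice in the
    region-covariance literature; assumed here, not taken from the paper).
    """
    h, w, _ = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    gray = patch.mean(axis=2)
    ix = np.gradient(gray, axis=1)
    iy = np.gradient(gray, axis=0)
    feats = np.stack([xs, ys,
                      patch[..., 0], patch[..., 1], patch[..., 2],
                      np.abs(ix), np.abs(iy)], axis=-1).reshape(-1, 7)
    return np.cov(feats, rowvar=False) + 1e-6 * np.eye(7)   # regularized SPD matrix

def geodesic_distance(c1, c2):
    """Affine-invariant geodesic distance between two SPD matrices."""
    lam = eigvalsh(c2, c1)                 # generalized eigenvalues of (c2, c1)
    return np.sqrt(np.sum(np.log(lam) ** 2))

def appearance_likelihood(c_candidate, c_reference, sigma=1.0):
    """Gaussian likelihood on the manifold distance (sigma is a hypothetical bandwidth)."""
    d = geodesic_distance(c_candidate, c_reference)
    return np.exp(-0.5 * (d / sigma) ** 2)
```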
