Home > Foreign Journals > Image and Vision Computing > Online learning and fusion of orientation appearance models for robust rigid object tracking

Online learning and fusion of orientation appearance models for robust rigid object tracking



Abstract

We introduce a robust framework for learning and fusion of orientation appearance models, based on both texture and depth information, for rigid object tracking. Our framework fuses data obtained from a standard visual camera with dense depth maps obtained from low-cost consumer depth cameras such as the Kinect. To combine these two completely different modalities, we propose to use features that do not depend on the data representation: angles. More specifically, our framework combines image gradient orientations, extracted from intensity images, with the directions of surface normals computed from dense depth fields. We propose to capture the correlations between the resulting orientation appearance models using a fusion approach motivated by the original Active Appearance Models (AAMs). To incorporate these features in a learning framework, we use a robust kernel based on the Euler representation of angles, which does not require off-line training and can be implemented efficiently online. The robustness of learning from orientation appearance models is demonstrated both theoretically and experimentally in this work. This kernel enables us to cope with gross measurement errors and missing data, as well as other typical problems such as illumination changes and occlusions. By combining the proposed models with a particle filter, the proposed framework was used to perform 2D-plus-3D rigid object tracking, achieving robust performance in very difficult tracking scenarios, including extreme pose variations.
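The angle-based features described in the abstract can be sketched compactly. The following minimal NumPy illustration (an assumption-laden sketch, not the authors' implementation) shows the three ingredients: image gradient orientations from an intensity image, surface-normal direction angles from a depth map, and a similarity kernel built on the Euler representation, which maps each angle phi to the unit complex number e^{i*phi}. Function names and the simple `np.gradient`-based derivative scheme are illustrative choices, not taken from the paper.

```python
import numpy as np

def gradient_orientations(image):
    """Per-pixel gradient orientation angles of an intensity image."""
    gy, gx = np.gradient(image.astype(float))  # derivatives along rows, cols
    return np.arctan2(gy, gx)                  # angles in (-pi, pi]

def normal_azimuths(depth):
    """Azimuth angles of surface normals computed from a dense depth map.

    For a surface z = depth(y, x), an (unnormalized) normal is
    (-dz/dx, -dz/dy, 1); its in-plane direction gives an angle feature
    of the same type as a gradient orientation.
    """
    gy, gx = np.gradient(depth.astype(float))
    return np.arctan2(-gy, -gx)

def euler_similarity(phi1, phi2):
    """Similarity of two orientation fields via the Euler representation.

    Each angle is mapped to e^{i*phi}; the kernel value is the real part
    of the mean of e^{i*(phi1 - phi2)}, i.e. mean(cos(phi1 - phi2)).
    Matching orientations contribute +1, while orientations corrupted by
    occlusion or gross errors are approximately uniformly distributed and
    average out to ~0 -- the intuition behind the robustness claim.
    """
    return float(np.mean(np.cos(phi1 - phi2)))
```

As a sanity check, an orientation field compared with itself yields a similarity of 1, while two independent random fields yield a value near 0, which is how gross outliers and occluded regions are suppressed rather than propagated.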
