Journal of Visual Communication & Image Representation

Incremental object learning and robust tracking of multiple objects from RGB-D point set data



Abstract

In this paper, we propose a novel model-free approach for tracking multiple objects from RGB-D point set data. This study aims to achieve robust, real-time tracking of arbitrary objects under dynamic interaction. To represent an object without prior knowledge, the probability density of each object is represented by Gaussian mixture models (GMM) with a tempo-spatial topological graph (TSTG). A flexible object model is incrementally updated in the proposed tracking framework, where each RGB-D point is assigned to an object at each time step. Furthermore, the proposed method maintains robust temporal associations among the updated objects through splits, complete occlusion, partial occlusion, and multiple-contact dynamic interactions. The performance of the method was examined in terms of tracking accuracy and computational efficiency in various experiments, achieving over 97% accuracy at five frames per second. The limitations of the method were also empirically investigated with respect to the size of the point set and the movement speed of objects.
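The abstract's core idea of representing each object's probability density with a GMM and assigning every RGB-D point to an object at each time step can be illustrated with a minimal sketch. This is not the authors' implementation (which additionally uses the tempo-spatial topological graph and incremental model updates); it only shows the point-to-object assignment step under an assumed mixture format of `(weight, mean, covariance)` triples per object, with the function names chosen here for illustration:

```python
import numpy as np

def gaussian_pdf(points, mean, cov):
    """Multivariate normal density evaluated at each 3-D point (batched)."""
    d = points.shape[-1]
    diff = points - mean
    inv = np.linalg.inv(cov)
    expo = -0.5 * np.einsum('...i,ij,...j->...', diff, inv, diff)
    norm = np.sqrt(((2.0 * np.pi) ** d) * np.linalg.det(cov))
    return np.exp(expo) / norm

def assign_points(points, object_gmms):
    """Assign each RGB-D point to the object whose GMM density is highest.

    points      : (N, 3) array of 3-D point coordinates
    object_gmms : list of mixtures, one per tracked object; each mixture is
                  a list of (weight, mean, covariance) components
    Returns an (N,) array of object indices.
    """
    densities = []
    for gmm in object_gmms:
        dens = np.zeros(len(points))
        for weight, mean, cov in gmm:
            dens += weight * gaussian_pdf(points, mean, cov)
        densities.append(dens)
    # Maximum-density (hard) assignment; the soft responsibilities in
    # `densities` could instead drive an incremental GMM update.
    return np.argmax(np.stack(densities, axis=1), axis=1)

# Two well-separated single-component objects as a toy example.
pts = np.array([[0.1, 0.0, 0.0], [5.0, 4.9, 5.1]])
gmm_a = [(1.0, np.zeros(3), np.eye(3))]
gmm_b = [(1.0, np.full(3, 5.0), np.eye(3))]
labels = assign_points(pts, [gmm_a, gmm_b])  # point 0 -> object 0, point 1 -> object 1
```

In a full tracker, this assignment would be repeated per frame, with each object's mixture re-estimated from its assigned points to keep the model current.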


