IEEE International Symposium on Mixed and Augmented Reality Adjunct

Compact Object Representation of a Non-Rigid Object for Real-Time Tracking in AR Systems

Abstract

Detecting moving objects in the real world reliably, robustly and efficiently is an essential but difficult task in AR applications, especially for interactions between virtual agents and real pedestrians, motorcycles and other non-rigid objects whose spatial occupancy must be perceived. In this paper, a novel object tracking method using visual cues with pre-training is proposed to track dynamic objects in 2D online videos robustly and reliably. The object's area in an image can be transformed into a 3D spatial region in the physical world using a few simple, well-defined constraints and priors, so spatial collisions between virtual agents and real pedestrians can be avoided in AR environments. To achieve robust tracking in a markerless AR environment, we first create a novel representation of non-rigid objects: the manifold of normalized sub-images of all possible appearances of the target object. These sub-images, captured from multiple views and under varying lighting conditions, are free from occlusion and can be obtained both from video sequences and from synthetic image generation. Then, from the instance pool made up of these sub-images, our proposed iterative method based on sparse dictionary learning learns a compact set of templates that represents the manifold well. We ensure that this template set is complete with an SVM-based sparsity detection method. This compact, complete template set is then used to track the target trajectory online in video and augmented reality (AR) systems. Experiments demonstrate the robustness and efficiency of our method.
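
As an illustration of the template-learning step described in the abstract, the following Python sketch (not the authors' implementation) uses scikit-learn's DictionaryLearning to select a compact template set from an instance pool of normalized sub-images and then flags instances the set fails to represent. The pool data, template count, sparsity level and residual threshold are all assumed values, and the paper's SVM-based sparsity detection is replaced here by a simple reconstruction-error check.

    # Minimal sketch, assuming flattened, photometrically normalized sub-image
    # crops as the instance pool; random data stands in for real crops.
    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    rng = np.random.default_rng(0)

    # Instance pool: each row is one normalized sub-image (here 16x16, assumed).
    n_instances, patch_dim = 300, 16 * 16
    pool = rng.standard_normal((n_instances, patch_dim))
    pool /= np.linalg.norm(pool, axis=1, keepdims=True)   # unit-norm each instance

    # Learn a compact dictionary: each atom serves as one template of the
    # appearance manifold; sizes and sparsity are illustrative assumptions.
    n_templates = 40
    learner = DictionaryLearning(
        n_components=n_templates,
        alpha=1.0,                      # sparsity weight
        max_iter=100,
        transform_algorithm="omp",
        transform_n_nonzero_coefs=5,    # few templates explain each instance
        random_state=0,
    )
    codes = learner.fit_transform(pool)   # sparse codes, (n_instances, n_templates)
    templates = learner.components_       # template set, (n_templates, patch_dim)

    # Completeness check: the paper uses SVM-based sparsity detection; as a
    # stand-in, flag instances whose sparse reconstruction error is too large,
    # i.e. appearances the current template set does not yet cover.
    residual = np.linalg.norm(pool - codes @ templates, axis=1)
    uncovered = np.flatnonzero(residual > 0.9)   # threshold is an assumption
    print(f"{len(uncovered)} instances poorly represented; "
          "grow the template set and re-learn if this is non-empty.")

In this sketch the learned templates would then be matched against candidate regions of each incoming frame to track the target online; how the actual system scores candidates and updates the trajectory is described in the paper, not here.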
