IEEE International Symposium on Mixed and Augmented Reality

Pixel-wise closed-loop registration in video-based augmented reality


Abstract

In Augmented Reality (AR), visible misregistration can be caused by many inherent error sources, such as errors in tracking, calibration, and modeling. In this paper we present a novel pixel-wise closed-loop registration framework that can automatically detect and correct registration errors using a reference model comprised of the real scene model and the desired virtual augmentations. Registration errors are corrected in both global world space via camera pose refinement, and local screen space via pixel-wise corrections, resulting in spatially accurate and visually coherent registration. Specifically we present a registration-enforcing model-based tracking approach that weights important image regions while refining the camera pose estimates (from any conventional tracking method) to achieve better registration, even in the case of modeling errors. To deal with remaining errors, which can be rigid or non-rigid, we compute the optical flow between the camera image and the real model image rendered with the refined pose, enabling direct screen-space pixel-wise corrections to misregistration. The estimated flow field can be applied to improve registration in two distinct ways: (1) forward warping of modeled on-real-object-surface augmentations (e.g., object re-texturing) into the camera image, leading to surface details that are not present in the virtual object; and (2) backward warping of the camera image into the real scene model, preserving the full use of the dense geometry buffer (depth in particular) provided by the combined real-virtual model for registration, leading to pixel accurate real-virtual occlusion. We discuss the trade-offs between, and different use cases of, forward and backward warping with model-based tracking in terms of specific properties for registration. We demonstrate the efficacy of our approach with both simulated and real data.
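
To make the screen-space correction step concrete, below is a minimal sketch of the forward-warping use case, assuming OpenCV's Farnebäck dense flow as a stand-in for whatever flow estimator the authors actually use. The inputs `camera_gray`, `model_gray` (the real-scene model rendered at the refined pose), and `augmentation_rgba` (the on-surface augmentation rendered in model space) are hypothetical names, not from the paper.

```python
# Sketch of forward warping an on-surface augmentation into the camera frame,
# assuming OpenCV; function and variable names are illustrative only.
import cv2
import numpy as np

def warp_augmentation_into_frame(camera_gray, model_gray, augmentation_rgba):
    """Estimate dense flow from the camera frame to the rendered model image,
    then resample the model-space augmentation at the flow-corrected positions."""
    # For each camera pixel p, flow gives the displacement d such that
    # camera(p) matches model(p + d).
    flow = cv2.calcOpticalFlowFarneback(
        camera_gray, model_gray, None,
        pyr_scale=0.5, levels=4, winsize=21,
        iterations=3, poly_n=7, poly_sigma=1.5, flags=0)

    h, w = camera_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)

    # cv2.remap performs backward sampling, so flow estimated from camera to
    # model tells each camera pixel where to fetch the augmentation from,
    # which realizes the "forward warping" of the modeled augmentation.
    return cv2.remap(augmentation_rgba, map_x, map_y, cv2.INTER_LINEAR)
```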
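A corresponding sketch of the backward-warping use case, under the same assumptions: the camera image is warped into model space, where assumed z-buffers `real_depth` and `virtual_depth` (from rendering the real model and the virtual content, respectively) resolve occlusion per pixel.

```python
# Sketch of backward warping the camera image into model space for
# pixel-accurate real-virtual occlusion; names are illustrative only.
import cv2
import numpy as np

def composite_with_occlusion(camera_bgr, camera_gray, model_gray,
                             virtual_bgr, real_depth, virtual_depth):
    # Flow from the rendered model image to the camera frame: for each model
    # pixel, where to fetch the matching camera content (backward warping).
    flow = cv2.calcOpticalFlowFarneback(
        model_gray, camera_gray, None,
        pyr_scale=0.5, levels=4, winsize=21,
        iterations=3, poly_n=7, poly_sigma=1.5, flags=0)

    h, w = model_gray.shape
    gx, gy = np.meshgrid(np.arange(w), np.arange(h))
    warped_camera = cv2.remap(
        camera_bgr,
        (gx + flow[..., 0]).astype(np.float32),
        (gy + flow[..., 1]).astype(np.float32),
        cv2.INTER_LINEAR)

    # Per-pixel depth test in model space: virtual content is shown only where
    # it lies in front of the real geometry, so real surfaces correctly
    # occlude the virtual object at pixel accuracy.
    virtual_in_front = (virtual_depth < real_depth)[..., None]
    return np.where(virtual_in_front, virtual_bgr, warped_camera)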
