The Visual Computer

Disparity estimation in stereo video sequence with adaptive spatiotemporally consistent constraints



Abstract

Numerous stereo matching algorithms have been proposed to estimate disparity for a single pair of stereo images. However, even applying the best of them to temporal frames independently, i.e., without considering the temporal consistency between consecutive frames, may produce undesirable artifacts. Here, we propose a systematic method based on adaptive spatiotemporally consistent constraints that generates spatiotemporally consistent disparity maps for stereo video sequences. First, a reliable temporal neighborhood is used to enforce the "self-similarity" assumption and prevent errors caused by false optical flow matching from propagating between consecutive frames. Furthermore, we formulate an adaptive temporally predicted disparity map as prior knowledge of the current frame. It is used as a soft constraint to enhance the temporal consistency of disparities, increase robustness to luminance variation, and restrict the range of potential disparities for each pixel. Additionally, to further encourage smooth variation of disparities, an adaptive temporal segment confidence is incorporated as a soft constraint to reduce ambiguities caused by under- and over-segmentation, and to retain the disparity discontinuities that align with 3D object boundaries while distinguishing them from geometrically smooth regions with strong color gradients. Experimental evaluations demonstrate that our method significantly improves spatiotemporal consistency both quantitatively and qualitatively compared with other state-of-the-art methods on the synthetic DCB and realistic KITTI datasets.
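The abstract describes a temporally predicted disparity map acting as a soft constraint on the current frame's disparity estimation. The sketch below illustrates one way such a constraint could be folded into a basic per-pixel matching cost volume; it assumes a simple absolute-difference cost, a hypothetical weight `lambda_t`, and a precomputed confidence map, and is not the authors' formulation.

```python
# Minimal sketch (not the paper's implementation) of a temporal soft
# constraint on stereo matching: disparities that deviate from a temporally
# predicted map are penalized, weighted by a per-pixel confidence so that
# unreliable optical-flow predictions do not propagate errors.
import numpy as np

def matching_cost_volume(left, right, max_disp):
    """Absolute-difference cost volume C[d, y, x] for grayscale images."""
    h, w = left.shape
    cost = np.full((max_disp, h, w), 255.0)
    for d in range(max_disp):
        # Compare left pixel x with right pixel x - d.
        cost[d, :, d:] = np.abs(left[:, d:] - right[:, : w - d])
    return cost

def temporal_soft_constraint(cost, predicted_disp, confidence, lambda_t=0.5):
    """Add a penalty for deviating from the temporally predicted disparity.

    `predicted_disp` would typically come from warping the previous frame's
    disparity with optical flow; `confidence` (in [0, 1]) down-weights pixels
    whose flow match is unreliable.
    """
    max_disp = cost.shape[0]
    d = np.arange(max_disp)[:, None, None]
    deviation = np.abs(d - predicted_disp[None, :, :])
    return cost + lambda_t * confidence[None, :, :] * deviation

def estimate_disparity(left, right, predicted_disp, confidence, max_disp=64):
    cost = matching_cost_volume(left, right, max_disp)
    cost = temporal_soft_constraint(cost, predicted_disp, confidence)
    return np.argmin(cost, axis=0)  # winner-take-all over the constrained cost

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    left = rng.random((48, 64)) * 255
    right = np.roll(left, -4, axis=1)            # synthetic pair, disparity ~ 4
    prev = np.full((48, 64), 4.0)                # prediction from frame t-1
    conf = np.ones((48, 64))                     # fully confident prediction
    disp = estimate_disparity(left, right, prev, conf)
    print(disp.mean())
```

In a full pipeline the per-pixel confidence would be derived from the optical-flow and segmentation consistency checks the abstract mentions, rather than set to a constant as in this toy example.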
