IEEE Transactions on Circuits and Systems for Video Technology

Extracting Depth and Radiance From a Defocused Video Pair


Abstract

We present a novel iterative feedback approach for the simultaneous estimation of depth and all-in-focus (AIF) videos from a defocused video pair by joint spatiotemporal optimization. The depth and AIF videos benefit each other during the iterative optimization. First, for the recovery of the AIF video, a sparse prior on natural video is incorporated to ensure high-quality defocus-blur removal even under inaccurate depth estimation. Second, in the depth-estimation step, we feed back the spatial and temporal constraints from the high-quality AIF video and adopt a numerical solution that is robust to inaccuracies in the AIF recovery, further boosting the performance of the depth-from-defocus algorithm. Benefiting from the incorporation of AIF video priors and the temporal-consistency constraint, the proposed framework can effectively reconstruct the depth of textureless regions and is insensitive to camera-parameter changes. Our approach provides better temporal consistency and higher depth accuracy than the conventional method that applies post-smoothing to sequential per-frame estimation. We not only demonstrate the feasibility of our approach through experiments on real data but also provide visual and quantitative evaluations on synthetic data.
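The abstract describes a feedback loop that alternates between AIF recovery and depth estimation. The following is a minimal, hypothetical 1-D sketch of such an alternation, not the paper's actual algorithm: the square-wave scene, the box-blur defocus model, and the reblur-matching cost are all invented for illustration. Each iteration composites an AIF signal from the per-sample sharper frame, then re-estimates a binary depth label by checking which defocus hypothesis better explains both frames.

```python
import numpy as np

def box_blur(x, k=5):
    """Simple box-filter defocus model (a stand-in for a real PSF)."""
    return np.convolve(x, np.ones(k) / k, mode="same")

def alternate(near, far, n_iter=5, k=5):
    """Alternate AIF recovery and depth labeling from a defocused pair.

    Depth label 1 = in focus in `near`; 0 = in focus in `far`.
    """
    aif = 0.5 * (near + far)           # crude initial AIF estimate
    for _ in range(n_iter):
        reblur = box_blur(aif, k)      # what a frame should look like
                                       # where it is OUT of focus
        # Cost of each depth hypothesis, aggregated over a local window
        # (a toy stand-in for the paper's spatiotemporal constraints).
        c1 = box_blur((near - aif) ** 2 + (far - reblur) ** 2, k)
        c0 = box_blur((near - reblur) ** 2 + (far - aif) ** 2, k)
        depth = (c1 < c0).astype(float)
        # AIF recovery: take each sample from the frame predicted sharp.
        aif = np.where(depth == 1.0, near, far)
    return depth, aif

# Synthetic 1-D scene: a square wave whose left half is focused in `near`
# and whose right half is focused in `far`.
s = (np.arange(64) // 8 % 2).astype(float)
bs = box_blur(s)
near = np.where(np.arange(64) < 32, s, bs)
far = np.where(np.arange(64) < 32, bs, s)

depth, aif = alternate(near, far)
```

In this toy setting the recovered AIF signal matches the sharp scene better than either defocused input. The actual method replaces the binary labels and box PSF with continuous depth, a calibrated defocus model, and the sparse and temporal-consistency priors described above.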
