Journal: IEEE Transactions on Circuits and Systems for Video Technology

Shape-From-Focus Depth Reconstruction With a Spatial Consistency Model


Abstract

This paper presents a maximum a posteriori (MAP) framework that incorporates a spatial consistency prior model for depth reconstruction in the shape-from-focus (SFF) process. Existing SFF techniques, which reconstruct a dense 3-D depth map from multifocus image frames, usually perform poorly over low-contrast regions and need a large number of frames to achieve satisfactory results. To overcome these problems, a new depth reconstruction process is proposed that estimates depth values by solving a MAP estimation problem with the inclusion of a spatial consistency model. This consistency model assumes that, within a local region, the depth value of each pixel can be roughly predicted by an affine transformation of the image features at that pixel. A local learning process is proposed to construct the consistency model directly from the multifocus image sequence. By adopting this model, depth values can be inferred more robustly, especially over low-contrast regions. In addition, to improve computational efficiency, a cell-based version of the MAP framework is proposed. Experimental results on real and synthesized image data demonstrate improved accuracy and robustness compared with existing approaches. The results also show that the proposed method achieves strong performance even when only a few image frames are used.
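As a rough illustration of the idea described in the abstract (not the paper's actual algorithm), the sketch below computes an initial depth map from a multifocus stack by picking the best-focused frame per pixel, then refines it with a local affine consistency prior: within each window, depth is fit as an affine function of an image feature, and the MAP-style estimate blends the observed depth with that prediction. The sum-modified-Laplacian focus measure, the per-pixel mean intensity as the feature, and the scalar blend weight `lam` are all assumptions for illustration only.

```python
import numpy as np

def focus_measure(stack):
    """Sum-modified-Laplacian focus measure per frame.
    (Assumption: SML is a common SFF focus measure; the paper's
    exact choice may differ.)"""
    fm = np.empty_like(stack)
    for k, img in enumerate(stack):
        lx = np.abs(2 * img - np.roll(img, 1, 0) - np.roll(img, -1, 0))
        ly = np.abs(2 * img - np.roll(img, 1, 1) - np.roll(img, -1, 1))
        fm[k] = lx + ly
    return fm

def sff_map_depth(stack, lam=1.0, win=5):
    """Initial depth = argmax of focus over frames; each pixel is then
    blended with the prediction of a locally fitted affine model
    depth ~ a * feature + b (the spatial consistency prior)."""
    fm = focus_measure(stack)
    d0 = fm.argmax(axis=0).astype(float)   # data term: best-focused frame index
    feat = stack.mean(axis=0)              # simple per-pixel feature (assumption)
    h, w = d0.shape
    d = d0.copy()
    r = win // 2
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - r), min(h, i + r + 1)
            j0, j1 = max(0, j - r), min(w, j + r + 1)
            f = feat[i0:i1, j0:j1].ravel()
            z = d0[i0:i1, j0:j1].ravel()
            A = np.stack([f, np.ones_like(f)], axis=1)
            coef, *_ = np.linalg.lstsq(A, z, rcond=None)  # local affine fit
            pred = coef[0] * feat[i, j] + coef[1]
            # MAP-style estimate: weighted blend of data term and prior
            d[i, j] = (d0[i, j] + lam * pred) / (1.0 + lam)
    return d
```

On a synthetic stack where frame 1 is sharpest everywhere, the refined depth map stays at 1 for every pixel, since the data term and the affine prediction agree.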
