IEEE International Conference on Computational Photography

Depth from Defocus with Learned Optics for Imaging and Occlusion-aware Depth Estimation

Abstract

Monocular depth estimation remains a challenging problem, despite significant advances in neural network architectures that leverage pictorial depth cues alone. Inspired by depth from defocus and emerging point spread function engineering approaches that optimize programmable optics end-to-end with depth estimation networks, we propose a new and improved framework for depth estimation from a single RGB image using a learned phase-coded aperture. Our optimized aperture design uses rotational symmetry constraints for computational efficiency, and we jointly train the optics and the network using an occlusion-aware image formation model that provides more accurate defocus blur at depth discontinuities than previous techniques do. Using this framework and a custom prototype camera, we demonstrate state-of-the-art image and depth estimation quality among end-to-end optimized computational cameras in simulation and experiment.
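To make the pipeline described in the abstract concrete, the sketch below shows how an end-to-end "learned optics" system of this kind can be wired together in PyTorch: a rotationally symmetric phase-coded aperture parameterized by a few radial coefficients, depth-dependent PSFs from a simple Fourier-optics pupil model, a simplified occlusion-aware layered rendering step, and a small depth network, all optimized jointly. This is not the authors' implementation; every constant, shape, network, and the back-to-front compositing rule are illustrative assumptions.

# Minimal sketch of end-to-end optics + depth-network optimization (assumed values throughout).
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

N = 65                                        # aperture / PSF grid size (assumed, odd for easy padding)
WAVELENGTH = 550e-9                           # metres (assumed)
APERTURE_RADIUS = 2e-3                        # metres (assumed)
FOCUS_DISTANCE = 1.0                          # metres (assumed)
DEPTHS = torch.tensor([0.5, 1.0, 2.0, 4.0])   # discrete depth layers, near to far (assumed)

# Normalized radial coordinate over the aperture; rotational symmetry means the
# learned phase depends only on r, which keeps the optics parameter count tiny.
yy, xx = torch.meshgrid(torch.linspace(-1, 1, N), torch.linspace(-1, 1, N), indexing="ij")
r = torch.sqrt(xx**2 + yy**2)
pupil = (r <= 1.0).float()

class RadialPhaseMask(nn.Module):
    """Phase profile phi(r) = sum_k a_k * r^(2k+2), learned end to end."""
    def __init__(self, n_coeffs: int = 4):
        super().__init__()
        self.coeffs = nn.Parameter(torch.zeros(n_coeffs))
    def forward(self) -> torch.Tensor:
        powers = torch.stack([r ** (2 * k + 2) for k in range(len(self.coeffs))])
        return (self.coeffs.view(-1, 1, 1) * powers).sum(dim=0)

def psf_stack(phase: torch.Tensor) -> torch.Tensor:
    """Depth-dependent PSFs from a scalar Fourier-optics pupil model."""
    psfs = []
    for z in DEPTHS:
        # Quadratic defocus phase grows with deviation from the focus plane.
        defocus = (math.pi * APERTURE_RADIUS**2 / WAVELENGTH) * (1.0 / z - 1.0 / FOCUS_DISTANCE)
        field = pupil * torch.exp(1j * (phase + defocus * r**2))
        psf = torch.fft.fftshift(torch.fft.fft2(torch.fft.ifftshift(field))).abs() ** 2
        psfs.append(psf / psf.sum())
    return torch.stack(psfs)                  # (D, N, N)

def blur(img: torch.Tensor, kernel: torch.Tensor) -> torch.Tensor:
    """Convolve every channel of img (B, C, H, W) with one 2-D kernel."""
    b, c, h, w = img.shape
    out = F.conv2d(img.reshape(b * c, 1, h, w), kernel.view(1, 1, N, N), padding=N // 2)
    return out.reshape(b, c, h, w)

def render_coded_image(rgb_layers, alpha_layers, psfs):
    """Simplified occlusion-aware layered rendering: blur each depth layer with
    its own PSF and composite back to front, so blurred foregrounds occlude the
    background at depth discontinuities."""
    image = torch.zeros_like(rgb_layers[:, 0])
    for d in range(psfs.shape[0] - 1, -1, -1):          # farthest layer first
        image = blur(rgb_layers[:, d], psfs[d]) + (1.0 - blur(alpha_layers[:, d], psfs[d])) * image
    return image

class DepthNet(nn.Module):
    """Toy stand-in for the depth-estimation network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

mask, net = RadialPhaseMask(), DepthNet()
optimizer = torch.optim.Adam(list(mask.parameters()) + list(net.parameters()), lr=1e-4)

def training_step(rgb_layers, alpha_layers, gt_depth) -> float:
    """One joint optimization step over optics (phase coefficients) and network."""
    coded = render_coded_image(rgb_layers, alpha_layers, psf_stack(mask()))
    loss = F.l1_loss(net(coded), gt_depth)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In this sketch, rgb_layers is assumed to have shape (B, D, 3, H, W) and alpha_layers shape (B, D, 1, H, W), i.e. RGB-D training scenes pre-decomposed into D depth layers with per-layer occupancy masks; the full system described in the abstract additionally recovers the image and uses a more accurate occlusion-aware formation model than the compositing shown here.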