Home > Conference Papers > European Conference on Computer Vision > ATGV-Net: Accurate Depth Super-Resolution

ATGV-Net: Accurate Depth Super-Resolution



Abstract

In this work we present a novel approach for single depth map super-resolution. Modern consumer depth sensors, especially Time-of-Flight sensors, produce dense depth measurements, but are affected by noise and have a low lateral resolution. We propose a method that combines the benefits of recent advances in machine learning based single image super-resolution, i.e. deep convolutional networks, with a variational method to recover accurate high-resolution depth maps. In particular, we integrate a variational method that models the piecewise affine structures apparent in depth data via an anisotropic total generalized variation regularization term on top of a deep network. We call our method ATGV-Net and train it end-to-end by unrolling the optimization procedure of the variational method. To train deep networks, a large corpus of training data with accurate ground-truth is required. We demonstrate that it is feasible to train our method solely on synthetic data that we generate in large quantities for this task. Our evaluations show that we achieve state-of-the-art results on three different benchmarks, as well as on a challenging Time-of-Flight dataset, all without utilizing an additional intensity image as guidance.
