Conference: Ubiquitous Positioning, Indoor Navigation and Location-Based Services

Unsupervised Stereo Depth Estimation Refined by Perceptual Loss



Abstract

Object depth has long been a critical piece of information in the mobile robot field and in computer vision. In recent years, binocular depth estimation based on supervised learning with deep convolutional neural networks has seen huge success compared with traditional or unsupervised methods. Even so, unsupervised depth estimation methods merit further study because they avoid collecting the vast quantities of corresponding ground-truth depth data required for training. To this end, methods based on semi-supervised learning have been proposed, in which stereo images are reconstructed according to predicted disparities. Compared with supervised learning, their main limitation is the ill-posed problem of enforcing color similarity between the reconstructed image and the input color image. To alleviate this problem, in this paper we combine the more robust perceptual loss with the image color loss, encouraging similarity between the images' feature representations extracted from another convolutional neural network. Benefiting from both losses, we improve the stereo depth estimation accuracy of the method proposed by Godard et al. on the KITTI benchmark.
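The combination described above can be sketched as a weighted sum of a photometric (color) term and a perceptual term computed on feature maps. The sketch below is illustrative only: the toy single-kernel "feature extractor" stands in for the pretrained CNN used in the paper, and the weight `alpha` is a hypothetical parameter, not a value taken from the paper.

```python
import numpy as np

def color_loss(recon, target):
    """Mean absolute (L1) color difference between the reconstructed
    image and the input image -- the standard photometric term."""
    return np.mean(np.abs(recon - target))

def feature_map(img, kernel):
    """Stand-in feature extractor: a single valid 2-D convolution.
    In the paper, features come from another convolutional neural
    network; this toy kernel is only for illustration."""
    h, w = img.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def perceptual_loss(recon, target, kernel):
    """L1 distance between the feature representations of the two images."""
    return np.mean(np.abs(feature_map(recon, kernel)
                          - feature_map(target, kernel)))

def combined_loss(recon, target, kernel, alpha=0.85):
    """Weighted sum of the photometric and perceptual terms.
    alpha is a hypothetical blending weight, not from the paper."""
    return (alpha * color_loss(recon, target)
            + (1.0 - alpha) * perceptual_loss(recon, target, kernel))

# Toy grayscale "images": a uniform brightness offset fools the color
# term but leaves edge-like features unchanged.
target = np.linspace(0.0, 1.0, 25).reshape(5, 5)
recon = target + 0.1                 # reconstruction with a uniform offset
edge = np.array([[1.0, -1.0]])       # toy horizontal-gradient kernel

loss = combined_loss(recon, target, edge)
```

Note how the perceptual term is invariant to the uniform offset here, which is the intuition behind calling it "more robust" than raw color similarity: it compares structure rather than absolute intensity.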
