IEEE Conference on Computer Vision and Pattern Recognition

3DMatch: Learning Local Geometric Descriptors from RGB-D Reconstructions



Abstract

Matching local geometric features on real-world depth images is a challenging task due to the noisy, low-resolution, and incomplete nature of 3D scan data. These difficulties limit the performance of current state-of-the-art methods, which are typically based on histograms over geometric properties. In this paper, we present 3DMatch, a data-driven model that learns a local volumetric patch descriptor for establishing correspondences between partial 3D data. To amass training data for our model, we propose a self-supervised feature learning method that leverages the millions of correspondence labels found in existing RGB-D reconstructions. Experiments show that our descriptor not only matches local geometry in new scenes for reconstruction, but also generalizes to different tasks and spatial scales (e.g., instance-level object model alignment for the Amazon Picking Challenge, and mesh surface correspondence). Results show that 3DMatch consistently outperforms other state-of-the-art approaches by a significant margin. Code, data, benchmarks, and pre-trained models are available online at http://3dmatch.cs.princeton.edu.
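To make the idea concrete, the following is a minimal PyTorch sketch of the two ingredients the abstract names: a 3D ConvNet that maps a local volumetric (TSDF) patch to a fixed-length descriptor, and a contrastive loss trained on match/non-match patch pairs mined from RGB-D reconstructions. The layer sizes, margin, and names (`PatchDescriptor`, `contrastive_loss`) are illustrative assumptions, not the released 3DMatch implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDescriptor(nn.Module):
    """Sketch of a 3D ConvNet mapping a 30x30x30 TSDF patch to a
    512-d descriptor, in the spirit of 3DMatch. Architecture details
    are illustrative, not the paper's exact network."""
    def __init__(self, desc_dim=512):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 64, kernel_size=3), nn.ReLU(),    # 30^3 -> 28^3
            nn.MaxPool3d(2),                               # -> 14^3
            nn.Conv3d(64, 128, kernel_size=3), nn.ReLU(),  # -> 12^3
            nn.MaxPool3d(2),                               # -> 6^3
            nn.Conv3d(128, 256, kernel_size=3), nn.ReLU(), # -> 4^3
        )
        self.fc = nn.Linear(256 * 4 * 4 * 4, desc_dim)

    def forward(self, patch):  # patch: (B, 1, 30, 30, 30) TSDF values
        x = self.features(patch)
        return self.fc(x.flatten(1))

def contrastive_loss(desc_a, desc_b, is_match, margin=1.0):
    """Standard contrastive loss: pull descriptors of matching patches
    together, push non-matches apart by at least `margin`."""
    d = F.pairwise_distance(desc_a, desc_b)
    return (is_match * d.pow(2) +
            (1 - is_match) * F.relu(margin - d).pow(2)).mean()

# Toy usage with random patches and labels (illustrative only).
# In the paper's setting, positives are patch pairs whose centers fuse
# to the same 3D point across frames of an RGB-D reconstruction.
net = PatchDescriptor()
a = torch.randn(8, 1, 30, 30, 30)
b = torch.randn(8, 1, 30, 30, 30)
labels = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(net(a), net(b), labels)
loss.backward()
```

The appeal of this setup is that the correspondence labels come for free: any two depth frames that were fused into the same reconstruction implicitly label which surface patches match, so no manual annotation is needed.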

