International Conference on 3D Vision

3DTNet: Learning Local Features Using 2D and 3D Cues



Abstract

We present an approach to learning a 3D local descriptor by combining 2D texture and 3D geometric information; the resulting descriptor can be used to register partial 3D data for a variety of vision applications. Unlike previous approaches, which simply concatenate features learned from multiple sources into one descriptor, we learn 2D and 3D feature representations jointly. We design a network, 3DTNet, whose architecture is specifically tailored to learning robust local feature representations that leverage both texture and geometric information. The two types of information interact with each other, resulting in a more robust and stable feature representation. Finally, feature representations of multi-scale neighborhoods are aggregated to further improve feature matching performance. Extensive experimental results show that our method outperforms state-of-the-art 2D and 3D descriptors in both accuracy and efficiency.
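To make the idea concrete, below is a minimal PyTorch sketch of a descriptor that encodes 2D texture (per-point color) and 3D geometry (per-point relative coordinates) in two branches, lets the branches interact, and aggregates over two neighborhood scales. All layer sizes, the form of the interaction, and the multi-scale pooling are illustrative assumptions, not the actual 3DTNet architecture described in the paper.

```python
import torch
import torch.nn as nn


def mlp(dims):
    # Shared per-point MLP (1x1 convolutions) applied to a (B, C, N) tensor.
    layers = []
    for i in range(len(dims) - 1):
        layers += [nn.Conv1d(dims[i], dims[i + 1], 1),
                   nn.BatchNorm1d(dims[i + 1]),
                   nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)


class Joint2D3DDescriptor(nn.Module):
    # Hypothetical joint 2D/3D local descriptor, for illustration only.
    def __init__(self, out_dim=128):
        super().__init__()
        self.geo_branch = mlp([3, 32, 64])   # relative XYZ of neighbor points
        self.tex_branch = mlp([3, 32, 64])   # RGB of neighbor points
        # Interaction: each branch is refined from the concatenated features.
        self.geo_fuse = mlp([128, 128])
        self.tex_fuse = mlp([128, 128])
        # Final descriptor from multi-scale pooled features (2 scales here).
        self.head = nn.Sequential(nn.Linear(2 * 256, 256),
                                  nn.ReLU(inplace=True),
                                  nn.Linear(256, out_dim))

    def encode_scale(self, xyz, rgb):
        # xyz, rgb: (B, N, 3) neighbors of each keypoint at one scale.
        g = self.geo_branch(xyz.transpose(1, 2))   # (B, 64, N)
        t = self.tex_branch(rgb.transpose(1, 2))   # (B, 64, N)
        gt = torch.cat([g, t], dim=1)              # cross-branch interaction
        g2 = self.geo_fuse(gt)                     # (B, 128, N)
        t2 = self.tex_fuse(gt)                     # (B, 128, N)
        # Max-pool over the neighborhood to get one feature per scale.
        return torch.cat([g2.max(dim=2).values, t2.max(dim=2).values], dim=1)

    def forward(self, xyz_small, rgb_small, xyz_large, rgb_large):
        # Concatenate features pooled from two neighborhood radii.
        f = torch.cat([self.encode_scale(xyz_small, rgb_small),
                       self.encode_scale(xyz_large, rgb_large)], dim=1)
        return nn.functional.normalize(self.head(f), dim=1)  # unit-length


# Example: descriptors for 8 keypoints with 64 / 256 neighbors per scale.
model = Joint2D3DDescriptor()
d = model(torch.randn(8, 64, 3), torch.rand(8, 64, 3),
          torch.randn(8, 256, 3), torch.rand(8, 256, 3))
print(d.shape)  # torch.Size([8, 128])
```

In this sketch the "interaction" is simply each branch re-encoding the concatenation of both branches' per-point features before pooling; the paper's actual mechanism and scale count may differ.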
