International Conference on Signals and Systems

3D model retrieval based on deep Autoencoder neural networks



Abstract

The rapid growth of 3D model resources for 3D printing has created an urgent need for 3D model retrieval systems. Benefiting from advances in hardware, 3D models can now be easily rendered on a tablet computer or handheld mobile device. In this paper, we present a novel 3D model retrieval method that combines view-based features with deep learning. Because 2D images are highly distinguishable, representing a 3D model by multiple 2D views is one of the most common approaches to 3D model retrieval. Normalization is typically challenging and time-consuming for view-based retrieval methods; this work instead uses an unsupervised deep learning technique, the autoencoder, to learn compact view-based features. The proposed method is therefore rotation-invariant, requiring only translation and scale normalization of the 3D models in the dataset. For robustness, we represent the 2D views with Fourier descriptors and Zernike moments. Experimental results on the online Princeton Shape Benchmark dataset show that our method achieves more accurate retrieval than existing methods.
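The abstract describes a pipeline of rendering multiple 2D views per model, encoding each view with Fourier descriptors and Zernike moments, compressing those descriptors with an autoencoder, and retrieving models by distance in the compact code space. Below is a minimal sketch of that idea, not the authors' implementation: the descriptor dimension, layer sizes, mean-pooling of per-view codes, and Euclidean ranking are illustrative assumptions.

# Minimal sketch (assumed details, not the paper's code): compress per-view
# 2D descriptors (e.g. concatenated Fourier descriptors and Zernike moments)
# with a small autoencoder, then retrieve 3D models by nearest-neighbour
# search on the learned codes.
import torch
import torch.nn as nn

class ViewAutoencoder(nn.Module):
    def __init__(self, in_dim=128, code_dim=32):
        # in_dim / code_dim are illustrative, not values from the paper.
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                     nn.Linear(64, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 64), nn.ReLU(),
                                     nn.Linear(64, in_dim))

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

def train(model, views, epochs=50, lr=1e-3):
    # views: (num_views_total, in_dim) tensor of precomputed 2D view descriptors.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        recon, _ = model(views)
        loss = loss_fn(recon, views)   # unsupervised reconstruction objective
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

def model_signature(model, views):
    # One compact signature per 3D model: average the codes of all its rendered
    # views (a simple pooling choice assumed here for rotation insensitivity).
    with torch.no_grad():
        _, codes = model(views)
    return codes.mean(dim=0)

def retrieve(query_sig, database_sigs, k=5):
    # Rank database models by Euclidean distance to the query signature.
    dists = torch.cdist(query_sig.unsqueeze(0), database_sigs).squeeze(0)
    return torch.topk(dists, k, largest=False).indices

In use, each 3D model in the dataset would be rendered from several viewpoints, its view descriptors passed through model_signature, and the resulting signatures stacked into database_sigs before calling retrieve on a query model's signature.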
