
An Improved Recurrent Neural Network for 3D Object Reconstruction

Abstract

3D-R2N2 and other advanced 3D reconstruction neural networks have achieved impressive results, but most of them still suffer from training difficulties and loss of detail, owing to weak feature extraction and poorly chosen loss functions. This paper aims to overcome these shortcomings by building a new model based on 3D-R2N2. The new model adopts a densely connected structure as its encoder and uses Chamfer Distance as its loss function, with the aim of strengthening the network's ability to learn from complex data while focusing the whole network on reconstructing fine detail structures. In addition, we improve the decoder by building two parallel predictor branches, which make better use of the feature information and boost the network's performance on the reconstruction task. Extensive tests show that our proposed model, called 3D-R2N2-V2, is slightly slower than 3D-R2N2 at prediction, but trains 20% to 30% faster and obtains 15% and 10% better voxel IoU on single- and multi-view reconstruction tasks, respectively. Compared with other recent state-of-the-art methods such as OGN and DRC, the reconstruction quality of our approach is also competitive.
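
The abstract does not spell out how Chamfer Distance is computed over the network's voxel outputs. As an illustration only, below is a minimal PyTorch sketch of the symmetric Chamfer Distance between two point sets; the function name and the point-set representation are assumptions for this sketch, not the authors' code.

```python
import torch

def chamfer_distance(p, q):
    """Symmetric Chamfer Distance between two point sets (illustrative sketch).

    p: (N, 3) tensor, q: (M, 3) tensor.
    Returns the mean squared distance from each point in p to its nearest
    neighbour in q, plus the same term from q to p.
    """
    # Pairwise squared distances between all points, shape (N, M).
    diff = p.unsqueeze(1) - q.unsqueeze(0)   # (N, M, 3) via broadcasting
    dist = (diff ** 2).sum(dim=-1)           # (N, M)
    # Nearest-neighbour term in each direction, averaged over the set.
    return dist.min(dim=1).values.mean() + dist.min(dim=0).values.mean()
```

One might, for example, apply this to the centres of occupied voxels in the predicted and ground-truth grids; note that extracting those centres by thresholding is not differentiable, so the paper's actual training formulation presumably differs. The sketch only illustrates the distance itself.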