Conference on Neural Information Processing Systems

Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations



Abstract

Unsupervised learning with generative models has the potential of discovering rich representations of 3D scenes. While geometric deep learning has explored 3D-structure-aware representations of scene geometry, these models typically require explicit 3D supervision. Emerging neural scene representations can be trained only with posed 2D images, but existing methods ignore the three-dimensional structure of scenes. We propose Scene Representation Networks (SRNs), a continuous, 3D-structure-aware scene representation that encodes both geometry and appearance. SRNs represent scenes as continuous functions that map world coordinates to a feature representation of local scene properties. By formulating the image formation as a differentiable ray-marching algorithm, SRNs can be trained end-to-end from only 2D images and their camera poses, without access to depth or shape. This formulation naturally generalizes across scenes, learning powerful geometry and appearance priors in the process. We demonstrate the potential of SRNs by evaluating them for novel view synthesis, few-shot reconstruction, joint shape and appearance interpolation, and unsupervised discovery of a non-rigid face model.
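The abstract's core idea can be illustrated with a minimal sketch (not the authors' code): a small MLP stands in for the continuous function that maps world coordinates to local scene features, and a ray marcher steps each camera ray forward by a feature-driven amount. The names (`SceneMLP`, `step_length`, `ray_march`) and the linear step head are illustrative assumptions; the paper uses a learned LSTM ray marcher and trains everything end-to-end from posed 2D images.

```python
# Minimal sketch of an SRN-style scene representation, assuming:
# - an untrained MLP phi: R^3 -> R^d in place of the learned scene network,
# - a fixed linear head in place of the paper's learned LSTM ray marcher.
import numpy as np

rng = np.random.default_rng(0)

class SceneMLP:
    """Continuous scene function phi: R^3 -> R^d (random weights for this sketch)."""
    def __init__(self, hidden=32, feat_dim=16):
        self.w1 = rng.normal(0.0, 0.5, (3, hidden))
        self.w2 = rng.normal(0.0, 0.5, (hidden, feat_dim))

    def __call__(self, x):
        h = np.tanh(x @ self.w1)      # x: (..., 3) world coordinates
        return np.tanh(h @ self.w2)   # (..., feat_dim) local scene features

# Stand-in for the learned ray marcher: map a feature to a positive
# step length along the ray.
w_step = rng.normal(0.0, 0.1, 16)

def step_length(feature):
    return np.logaddexp(0.0, feature @ w_step)  # softplus keeps steps > 0

def ray_march(phi, origin, direction, n_steps=10):
    """March one camera ray; return the final 3D point and its feature."""
    d = direction / np.linalg.norm(direction)
    t = 0.05                          # initial depth along the ray
    for _ in range(n_steps):
        feat = phi(origin + t * d)    # query the scene at the current point
        t = t + step_length(feat)     # adaptive, feature-driven step
    x_final = origin + t * d
    return x_final, phi(x_final)

phi = SceneMLP()
point, feat = ray_march(phi, np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(point.shape, feat.shape)  # (3,) (16,)
```

Because every operation above is differentiable, gradients from a 2D reconstruction loss could flow back through the marching loop into the scene function, which is what lets SRNs train from images and camera poses alone.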


