IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops

OmniLayout: Room Layout Reconstruction from Indoor Spherical Panoramas



Abstract

Given a single RGB panorama, the goal of 3D layout reconstruction is to estimate the room layout by predicting the corners, floor boundary, and ceiling boundary. A common approach has been to use standard convolutional networks to predict the corners and boundaries, followed by post-processing to generate the 3D layout. However, the space-varying distortions in panoramic images are not compatible with the translational equivariance property of standard convolutions, thus degrading performance. Instead, we propose to use spherical convolutions. The resulting network, which we call OmniLayout, performs convolutions directly on the sphere surface, sampling according to inverse equirectangular projection and hence invariant to equirectangular distortions. Using a new evaluation metric, we show that our network reduces the error in the heavily distorted regions (near the poles) by ≈25% when compared to standard convolutional networks. Experimental results show that OmniLayout outperforms the state-of-the-art by ≈4% on two different benchmark datasets (PanoContext and Stanford 2D-3D). Code is available at https://github.com/rshivansh/OmniLayout.
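The released code is at the GitHub link above; as context for how such a distortion-aware layer can work, below is a minimal PyTorch sketch (not the authors' implementation) of a spherical convolution that places its kernel taps on the tangent plane of the sphere and maps them back to equirectangular coordinates through the inverse gnomonic projection before sampling. The names SphereConv2d and make_sphere_grid, the 3x3 tangent-plane tap layout, and the use of grid_sample are illustrative assumptions.

# Minimal sketch of a distortion-aware (spherical) convolution on an
# equirectangular feature map. Kernel taps are laid out on the tangent plane
# at each pixel's (lat, lon) and projected back to the sphere, so the
# receptive field stays roughly uniform in solid angle instead of stretching
# near the poles. Hypothetical names, not the paper's code.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_sphere_grid(h, w, ks=3):
    """Return normalized equirectangular coords (h, w, ks*ks, 2) of the kernel taps."""
    lat = (0.5 - (torch.arange(h) + 0.5) / h) * math.pi           # [-pi/2, pi/2]
    lon = ((torch.arange(w) + 0.5) / w - 0.5) * 2 * math.pi        # [-pi, pi]
    lat, lon = torch.meshgrid(lat, lon, indexing="ij")
    d = 2 * math.pi / w                                            # one-pixel angular step
    off = (torch.arange(ks) - ks // 2) * d
    dy, dx = torch.meshgrid(off, off, indexing="ij")
    dx, dy = dx.reshape(-1), dy.reshape(-1)                        # (ks*ks,)
    # inverse gnomonic projection: tangent-plane offset -> (lat', lon') on the sphere
    rho = torch.sqrt(dx ** 2 + dy ** 2).clamp(min=1e-8)
    c = torch.atan(rho)
    sin_lat, cos_lat = torch.sin(lat)[..., None], torch.cos(lat)[..., None]
    arg = (torch.cos(c) * sin_lat + dy * torch.sin(c) * cos_lat / rho).clamp(-1.0, 1.0)
    lat_t = torch.asin(arg)
    lon_t = lon[..., None] + torch.atan2(
        dx * torch.sin(c),
        rho * cos_lat * torch.cos(c) - dy * sin_lat * torch.sin(c))
    # back to grid_sample's normalized coordinates in [-1, 1]
    gx = lon_t / math.pi
    gy = -2 * lat_t / math.pi
    return torch.stack([gx, gy], dim=-1)

class SphereConv2d(nn.Module):
    """Convolution whose taps follow the sphere surface (distortion-aware)."""
    def __init__(self, in_ch, out_ch, ks=3):
        super().__init__()
        self.ks = ks
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch * ks * ks) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_ch))
        self.grid = None                                           # cached sampling grid

    def forward(self, x):
        b, c, h, w = x.shape
        if self.grid is None or self.grid.shape[:2] != (h, w):
            self.grid = make_sphere_grid(h, w, self.ks).to(x.device)
        grid = self.grid.reshape(1, h, w * self.ks * self.ks, 2).expand(b, -1, -1, -1)
        taps = F.grid_sample(x, grid, align_corners=False)         # (b, c, h, w*ks*ks)
        taps = taps.reshape(b, c, h, w, self.ks * self.ks)
        taps = taps.permute(0, 2, 3, 1, 4).reshape(b, h, w, -1)    # (b, h, w, c*ks*ks)
        out = taps @ self.weight.t() + self.bias
        return out.permute(0, 3, 1, 2)                             # (b, out_ch, h, w)

Replacing standard convolutions in an encoder with a layer like this keeps the sampling pattern consistent with the sphere geometry, which is the property the abstract credits for the reduced error near the poles.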
