
Bayesian reconstruction of 3D shapes and scenes from a single image



Abstract

It is common experience for human vision to perceive a full 3D shape and scene from a single 2D image, with the occluded parts "filled in" by prior visual knowledge. We represent prior knowledge of 3D shapes and scenes by probabilistic models at two levels, both defined on graphs. The first-level model is built on a graph representation of single objects, and it is a mixture model covering both man-made block objects and natural objects such as trees and grass. It assumes surface and boundary smoothness, 3D angle symmetry, and so on. The second-level model is built on the relation graph of all objects in a scene. It assumes that objects should be supported for maximum stability, with global bounding surfaces such as the ground, sky, and walls. Given an input image, we extract the geometric and photometric structures through image segmentation and sketching, and represent them in a large graph. We then partition the graph into subgraphs, each corresponding to one object; infer the 3D shape and recover the occluded surfaces, edges, and vertices in each subgraph; and infer the scene structure among the recovered 3D subgraphs. The inference algorithm samples from the prior model under the constraint that it reproduces the observed image/sketch under projective geometry.
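The two computational steps named in the abstract — partitioning the sketch graph into per-object subgraphs, then sampling 3D interpretations from the prior under the projection constraint — can be sketched in Python as follows. All names here are hypothetical illustrations, not the authors' implementation: the real system uses learned partition proposals and a richer posterior, whereas this toy uses connected components and a generic Metropolis acceptance rule.

```python
import math
import random
from collections import defaultdict

def partition_into_objects(nodes, edges):
    """Toy stand-in for the graph-partition step: split the sketch
    graph into connected subgraphs, each a candidate object."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, objects = set(), []
    for n in nodes:
        if n in seen:
            continue
        comp, stack = set(), [n]
        while stack:  # depth-first traversal of one component
            v = stack.pop()
            if v not in comp:
                comp.add(v)
                stack.extend(adj[v] - comp)
        seen |= comp
        objects.append(sorted(comp))
    return objects

def metropolis_step(state, propose, log_posterior, rng=random):
    """One Metropolis move: propose a new 3D interpretation and accept
    with probability min(1, p_new / p_old).  With log_posterior built
    as log-prior plus a projection-consistency log-likelihood, repeated
    steps sample from the prior constrained by the observed sketch."""
    cand = propose(state)
    # 1e-300 guards against log(0) when the RNG returns exactly 0.0
    if math.log(rng.random() + 1e-300) < log_posterior(cand) - log_posterior(state):
        return cand
    return state
```

For example, a sketch graph with strokes `a-b-c`, `d-e`, and an isolated stroke `f` partitions into three candidate objects `[a,b,c]`, `[d,e]`, `[f]`; each subgraph would then be lifted to 3D by running `metropolis_step` in a loop with that object's posterior.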

