Journal: ACM Transactions on Graphics
Painting-to-3D Model Alignment via Discriminative Visual Elements


Abstract

This article describes a technique that can reliably align arbitrary 2D depictions of an architectural site, including drawings, paintings, and historical photographs, with a 3D model of the site. This is a tremendously difficult task, as the appearance and scene structure in the 2D depictions can be very different from the appearance and geometry of the 3D model, for example, due to the specific rendering style, drawing error, age, lighting, or change of seasons. In addition, we face a hard search problem: the number of possible alignments of the painting to a large 3D model, such as a partial reconstruction of a city, is huge. To address these issues, we develop a new compact representation of complex 3D scenes. The 3D model of the scene is represented by a small set of discriminative visual elements that are automatically learned from rendered views. Similar to object detection, the set of visual elements, as well as the weights of individual features for each element, are learned in a discriminative fashion. We show that the learned visual elements are reliably matched in 2D depictions of the scene despite large variations in rendering style (e.g., watercolor, sketch, historical photograph) and structural changes (e.g., missing scene parts, large occluders) of the scene. We demonstrate an application of the proposed approach to automatic rephotography to find an approximate viewpoint of historical paintings and photographs with respect to a 3D model of the site. The proposed alignment procedure is validated via a human user study on a new database of paintings and sketches spanning several sites. The results demonstrate that our algorithm produces significantly better alignments than several baseline methods.
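The abstract's core idea, learning a small set of discriminative visual elements from rendered views and matching them in 2D depictions, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes patch descriptors (e.g., HOG-like feature vectors) and uses a whitening-based linear detector (exemplar-LDA style), where the detector weights for one element are obtained from the element's feature and background statistics collected over many patches. The function names and the regularization parameter are hypothetical.

```python
import numpy as np

def train_element_detector(x, mu, sigma, reg=1e-3):
    """Learn a linear detector for one visual element.

    Whitening / exemplar-LDA style: w = Sigma^{-1} (x - mu), where
    x is the feature of the element's patch from a rendered view and
    (mu, sigma) are the mean and covariance of features over many
    generic "background" patches. The small ridge term `reg` keeps
    the covariance invertible.
    """
    d = x.shape[0]
    w = np.linalg.solve(sigma + reg * np.eye(d), x - mu)
    return w

def match_element(w, candidates):
    """Score candidate patch features from a 2D depiction.

    Returns the index and score of the best-matching candidate
    under the linear detector w (higher score = better match).
    """
    scores = candidates @ w
    best = int(np.argmax(scores))
    return best, float(scores[best])

# Toy demonstration with synthetic descriptors: the detector trained
# on one patch should prefer that patch over background-like patches.
rng = np.random.default_rng(0)
d = 8
mu = rng.normal(size=d)
sigma = np.eye(d)
x = mu + 2.0 * rng.normal(size=d)          # the element's own feature
w = train_element_detector(x, mu, sigma)
candidates = np.vstack([x, mu + 0.1 * rng.normal(size=(5, d))])
best, score = match_element(w, candidates)
```

In the full system, many candidate elements would be trained this way from rendered views of the 3D model, the most discriminative ones retained, and their detections in a painting used to recover an approximate viewpoint.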
