ACM Transactions on Graphics

Deep Blending for Free-Viewpoint Image-Based Rendering


Abstract

Free-viewpoint image-based rendering (IBR) is a standing challenge. IBR methods combine warped versions of input photos to synthesize a novel view. The image quality of this combination is directly affected by geometric inaccuracies of multi-view stereo (MVS) reconstruction and by view- and image-dependent effects that produce artifacts when contributions from different input views are blended. We present a new deep learning approach to blending for IBR, in which we use held-out real image data to learn blending weights to combine input photo contributions. Our Deep Blending method requires us to address several challenges to achieve our goal of interactive free-viewpoint IBR navigation. We first need to provide sufficiently accurate geometry so the Convolutional Neural Network (CNN) can succeed in finding correct blending weights. We do this by combining two different MVS reconstructions with complementary accuracy vs. completeness tradeoffs. To tightly integrate learning in an interactive IBR system, we need to adapt our rendering algorithm to produce a fixed number of input layers that can then be blended by the CNN. We generate training data with a variety of captured scenes, using each input photo as ground truth in a held-out approach. We also design the network architecture and the training loss to provide high quality novel view synthesis, while reducing temporal flickering artifacts. Our results demonstrate free-viewpoint IBR in a wide variety of scenes, clearly surpassing previous methods in visual quality, especially when moving far from the input cameras.
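The central mechanism described in the abstract, a CNN that predicts per-pixel blending weights over a fixed number of warped input layers, can be illustrated with a minimal sketch. The following is a hypothetical PyTorch example, not the authors' actual network or training code; the layer sizes, the number of input layers, and the simple L1 held-out loss are illustrative assumptions (the paper's network and its flicker-reducing loss are more elaborate).

import torch
import torch.nn as nn

class BlendingCNN(nn.Module):
    # Sketch only: predict per-pixel blending weights for N warped input layers.
    def __init__(self, num_layers=4):
        super().__init__()
        self.num_layers = num_layers
        self.net = nn.Sequential(
            nn.Conv2d(num_layers * 3, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, num_layers, kernel_size=3, padding=1),
        )

    def forward(self, warped_layers):
        # warped_layers: (B, N, 3, H, W), the N input photos warped into the novel view
        b, n, c, h, w = warped_layers.shape
        x = warped_layers.reshape(b, n * c, h, w)
        weights = torch.softmax(self.net(x), dim=1)  # (B, N, H, W), sums to 1 per pixel
        novel_view = (weights.unsqueeze(2) * warped_layers).sum(dim=1)  # (B, 3, H, W)
        return novel_view, weights

# Held-out training sketch: one input photo acts as ground truth for a view
# rendered from the remaining photos (temporal-flicker handling omitted).
model = BlendingCNN(num_layers=4)
warped = torch.rand(1, 4, 3, 128, 128)  # placeholder warped input layers
target = torch.rand(1, 3, 128, 128)     # placeholder held-out photo
pred, _ = model(warped)
loss = torch.nn.functional.l1_loss(pred, target)
loss.backward()

The softmax over the layer dimension mirrors the idea of learned blending weights that always sum to one per pixel, so the composite stays within the range of the warped inputs.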
