International Conference on Computer Vision

Deep CG2Real: Synthetic-to-Real Translation via Image Disentanglement



Abstract

We present a method to improve the visual realism of low-quality, synthetic images, e.g. OpenGL renderings. Training an unpaired synthetic-to-real translation network in image space is severely under-constrained and produces visible artifacts. Instead, we propose a semi-supervised approach that operates on the disentangled shading and albedo layers of the image. Our two-stage pipeline first learns to predict accurate shading in a supervised fashion using physically-based renderings as targets, and further increases the realism of the textures and shading with an improved CycleGAN network. Extensive evaluations on the SUNCG indoor scene dataset demonstrate that our approach yields more realistic images compared to other state-of-the-art approaches. Furthermore, networks trained on our generated "real" images predict more accurate depth and normals than domain adaptation approaches, suggesting that improving the visual realism of the images can be more effective than imposing task-specific losses.
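The disentanglement described above rests on the intrinsic-image model, in which an image factors pixel-wise into an albedo (reflectance) layer and a shading layer, I = A ⊙ S. A minimal NumPy sketch of this decomposition (the array shapes, the toy values, and the epsilon guard are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def compose(albedo, shading):
    # Intrinsic image model: image = albedo * shading, per pixel and channel.
    return albedo * shading

def extract_shading(image, albedo, eps=1e-6):
    # Recover the shading layer given the image and its albedo;
    # eps guards against division by zero in near-black albedo regions.
    return image / np.clip(albedo, eps, None)

# Toy 2x2 RGB example: a uniform albedo lit by spatially varying shading.
albedo = np.full((2, 2, 3), 0.5)
shading = np.array([[[0.2] * 3, [0.4] * 3],
                    [[0.6] * 3, [0.8] * 3]])
image = compose(albedo, shading)
recovered = extract_shading(image, albedo)  # matches `shading` up to eps
```

Operating the translation networks on these two layers separately, rather than on the raw image, is what constrains the otherwise under-determined unpaired translation problem.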
