ACM Transactions on Graphics

High-Fidelity Facial Reflectance and Geometry Inference From an Unconstrained Image


Abstract

We present a deep learning-based technique to infer high-quality facial reflectance and geometry given a single unconstrained image of the subject, which may contain partial occlusions and arbitrary illumination conditions. The reconstructed high-resolution textures, which are generated in only a few seconds, include high-resolution skin surface reflectance maps, representing both the diffuse and specular albedo, and medium- and high-frequency displacement maps, thereby allowing us to render compelling digital avatars under novel lighting conditions. To extract this data, we train our deep neural networks with a high-quality skin reflectance and geometry database created with a state-of-the-art multi-view photometric stereo system using polarized gradient illumination. Given the raw facial texture map extracted from the input image, our neural networks synthesize complete reflectance and displacement maps, as well as complete missing regions caused by occlusions. The completed textures exhibit consistent quality throughout the face due to our network architecture, which propagates texture features from the visible region, resulting in high-fidelity details that are consistent with those seen in visible regions. We describe how this highly underconstrained problem is made tractable by dividing the full inference into smaller tasks, which are addressed by dedicated neural networks. We demonstrate the effectiveness of our network design with robust texture completion from images of faces that are largely occluded. With the inferred reflectance and geometry data, we demonstrate the rendering of high-fidelity 3D avatars from a variety of subjects captured under different lighting conditions. In addition, we perform evaluations demonstrating that our method can infer plausible facial reflectance and geometric details comparable to those obtained from high-end capture devices, and outperform alternative approaches that require only a single unconstrained input image.
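The abstract describes a staged design in which the full inference is split into smaller tasks handled by dedicated networks: texture completion from the visible region first, then synthesis of the individual output maps (diffuse albedo, specular albedo, displacement). The following is a minimal PyTorch sketch of that staging, not the authors' implementation; the `UNetStage` module, the channel counts, and the `infer_reflectance_and_geometry` function are hypothetical placeholders chosen only to make the data flow concrete.

```python
# Minimal sketch of the staged inference described in the abstract (assumptions, not the paper's code).
import torch
import torch.nn as nn

class UNetStage(nn.Module):
    """Placeholder two-layer convolutional block standing in for one of the paper's
    image-to-image networks; the real architectures and losses are not shown here."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def infer_reflectance_and_geometry(partial_texture, visibility_mask):
    """Staged inference: complete the partial texture first, then infer each map from it.
    In practice these networks would be trained on the multi-view photometric stereo
    database mentioned in the abstract; here they are randomly initialized."""
    completion_net   = UNetStage(in_ch=4, out_ch=3)  # RGB texture + visibility mask -> completed texture
    diffuse_net      = UNetStage(in_ch=3, out_ch=3)  # completed texture -> diffuse albedo
    specular_net     = UNetStage(in_ch=3, out_ch=1)  # completed texture -> specular albedo
    displacement_net = UNetStage(in_ch=3, out_ch=1)  # completed texture -> mid/high-frequency displacement

    completed = completion_net(torch.cat([partial_texture, visibility_mask], dim=1))
    return {
        "diffuse_albedo": diffuse_net(completed),
        "specular_albedo": specular_net(completed),
        "displacement": displacement_net(completed),
    }

# Example usage with a 512x512 texture map (batch of 1); the mask is 1 where the
# input image actually covers the facial texture and 0 in occluded regions.
tex = torch.rand(1, 3, 512, 512)
mask = torch.ones(1, 1, 512, 512)
maps = infer_reflectance_and_geometry(tex, mask)
```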
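The inferred maps are meant to drive relighting of a digital avatar under novel illumination. Below is a generic, illustrative shading sketch in NumPy showing one way such maps could be consumed: the diffuse albedo scales a Lambertian term, the specular albedo scales a Blinn-Phong lobe, and the displacement map perturbs per-pixel normals. This is not the paper's renderer; the function names, the shading model, and the `shininess` parameter are assumptions for illustration only.

```python
# Generic relighting sketch using inferred maps (assumed shading model, not the paper's renderer).
import numpy as np

def normals_from_displacement(disp):
    """Approximate per-pixel normals from a displacement (height) map via finite differences."""
    dy, dx = np.gradient(disp)
    n = np.dstack([-dx, -dy, np.ones_like(disp)])
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def shade(diffuse_albedo, specular_albedo, disp, light_dir, view_dir, shininess=32.0):
    """Relight a face texture under a novel directional light (Lambert + Blinn-Phong)."""
    n = normals_from_displacement(disp)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    h = (l + v) / np.linalg.norm(l + v)               # half vector
    n_dot_l = np.clip(n @ l, 0.0, None)[..., None]    # (H, W, 1)
    n_dot_h = np.clip(n @ h, 0.0, None)[..., None]
    return diffuse_albedo * n_dot_l + specular_albedo * n_dot_h ** shininess

# Example with random 256x256 maps standing in for the network outputs.
H = W = 256
rgb = shade(
    diffuse_albedo=np.random.rand(H, W, 3),
    specular_albedo=np.random.rand(H, W, 1),
    disp=np.random.rand(H, W),
    light_dir=np.array([0.3, 0.3, 1.0]),
    view_dir=np.array([0.0, 0.0, 1.0]),
)
```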
