ACM Transactions on Graphics

Relighting Humans: Occlusion-Aware Inverse Rendering for Full-Body Human Images



Abstract

Relighting of human images has various applications in image synthesis. For relighting, we must infer albedo, shape, and illumination from a human portrait. Previous techniques rely on human faces for this inference, based on spherical harmonics (SH) lighting. However, because they often ignore light occlusion, inferred shapes are biased and relit images are unnaturally bright, particularly at hollowed regions such as armpits, crotches, or garment wrinkles. This paper introduces the first attempt to infer light occlusion in the SH formulation directly. Based on supervised learning using convolutional neural networks (CNNs), we infer not only an albedo map and illumination but also a light transport map that encodes occlusion as nine SH coefficients per pixel. The main difficulty in this inference is the lack of training datasets compared to the unlimited variations of human portraits. Surprisingly, geometric information including occlusion can be inferred plausibly even with a small dataset of synthesized human figures, by carefully preparing the dataset so that the CNNs can exploit the data coherency. Our method accomplishes more realistic relighting than the occlusion-ignored formulation.
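To make the SH formulation concrete, the sketch below shows how an occlusion-aware relit image could be composed once the per-pixel albedo, the nine-coefficient light transport map, and the SH illumination are available. This is a minimal NumPy illustration under assumed array shapes, not the paper's implementation; the function name `relight` and the placeholder inputs are hypothetical.

```python
import numpy as np

def relight(albedo, transport, sh_light):
    """Hypothetical occlusion-aware SH relighting sketch (not the paper's code).

    albedo:    (H, W, 3) per-pixel albedo map
    transport: (H, W, 9) per-pixel light transport map; each pixel stores nine
               second-order SH coefficients that bake in cosine falloff and
               light occlusion
    sh_light:  (9, 3) scene illumination as nine SH coefficients per RGB channel
    """
    # Shading is the dot product of the per-pixel transport vector with the
    # illumination coefficients, evaluated separately for each color channel.
    shading = np.einsum('hwk,kc->hwc', transport, sh_light)
    # The relit image is albedo times shading, clamped to the displayable range.
    return np.clip(albedo * shading, 0.0, 1.0)

# Usage with random placeholders standing in for the CNN-inferred maps:
H, W = 256, 192
albedo = np.random.rand(H, W, 3)
transport = np.random.rand(H, W, 9)
sh_light = np.random.rand(9, 3)
relit = relight(albedo, transport, sh_light)
print(relit.shape)  # (256, 192, 3)
```

In the occlusion-ignored formulation, the transport vector would contain only the clamped-cosine SH coefficients of the surface normal, which is what produces the unnaturally bright hollowed regions described above.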


