International Conference on Systems and Informatics

Depth Learning Method of Many Light Rendering based on Matrix Row and Column Sampling



Abstract

In this paper, a deep learning method for many-light rendering based on matrix row and column sampling is proposed, comprising the following steps. Step 1: build a light matrix from the scene, in which each column records the contribution of one light source to all sampling points and each row records the contribution of all light sources to one sampling point. Step 2: randomly select several rows from the many-light matrix to form a reduced matrix. Step 3: for different viewpoints, render images from the once-reduced matrix and from the twice-reduced matrix (obtained by applying the random reduction a second time). Step 4: train a deep neural network on the image pairs rendered from the once-reduced and twice-reduced matrices. Step 5: during real-time, high-fidelity rendering, first render the image obtained from the twice-reduced matrix, then feed this image to the trained deep neural network; the network's output is the desired complete rendered image. The invention transforms many-light rendering of complex scenes into a training and learning problem for a deep neural network, obtains a higher-quality rendered image through the network's processing, and improves rendering efficiency and real-time performance. It can be applied to scenes with real-time and high-quality requirements.
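As a rough illustration of steps 1–3, the sketch below builds a stand-in light matrix (rows = sampling points, columns = light sources) and forms once- and twice-reduced versions by randomly keeping a subset of lights and rescaling their contributions. The abstract phrases the reduction in terms of randomly selected rows; sub-sampling light columns with an unbiased rescaling is an assumption made here for illustration, as are all names (`reduce_columns`, `keep_fraction`) and the matrix sizes.

```python
import numpy as np

def reduce_columns(A, keep_fraction, rng):
    """Randomly keep a fraction of the light columns and rescale so the
    row sums remain an unbiased estimate of the row sums of the input
    matrix (one possible reading of the 'reduction matrix' in the abstract)."""
    n_lights = A.shape[1]
    n_keep = max(1, int(keep_fraction * n_lights))
    cols = rng.choice(n_lights, size=n_keep, replace=False)
    return A[:, cols] * (n_lights / n_keep)

rng = np.random.default_rng(0)
A = rng.random((4096, 1024))        # stand-in light matrix: 4096 sampling points x 1024 lights
A1 = reduce_columns(A, 0.25, rng)   # once-reduced matrix
A2 = reduce_columns(A1, 0.25, rng)  # twice-reduced matrix (reduction applied again)
target = A1.sum(axis=1)             # per-sample radiance from the once-reduced matrix
cheap = A2.sum(axis=1)              # per-sample radiance from the twice-reduced matrix
```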
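Steps 4–5 amount to training an image-to-image network that maps a render produced from the twice-reduced matrix to a render produced from the once-reduced matrix, and then using only the cheap render plus the network at run time. The abstract does not specify the architecture or the loss, so the tiny PyTorch model, the L1 loss, and the placeholder tensors below are assumptions for illustration, not the authors' design.

```python
import torch
import torch.nn as nn

class RefineNet(nn.Module):
    """Deliberately small convolutional image-to-image network standing in
    for the (unspecified) deep neural network of steps 4-5."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        # Maps a cheap (twice-reduced) render to a refined render.
        return self.body(x)

net = RefineNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

# Step 4: train on image pairs (input = twice-reduced render, target = once-reduced render).
# Placeholder random tensors stand in for the rendered image pairs.
for cheap, target in [(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))]:
    opt.zero_grad()
    loss = loss_fn(net(cheap), target)
    loss.backward()
    opt.step()

# Step 5: at run time, render only from the twice-reduced matrix and refine with the network.
with torch.no_grad():
    refined = net(torch.rand(1, 3, 64, 64))
```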
