International Conference on Biomedical Engineering and Informatics

Regression based joint subspace learning for multi-view facial shape synthesis


Abstract

Multi-view facial image synthesis is an important issue in computer graphics, 3D facial image reconstruction, and accurate face recognition. In this paper, we propose a regression-based joint subspace learning method (RJSL) for automatic multi-view facial shape synthesis. The method synthesizes multi-view facial shapes from a single input facial image. In conventional joint-subspace-learning-based multi-view facial image synthesis, the coefficients estimated from the input image are used directly for synthesis. In our proposed method, the coefficients are instead estimated by a regression on the coefficients of the input facial image. We first construct an original multi-view facial database. Image pairs from different views (e.g., 0 degrees and 15 degrees, 0 degrees and −15 degrees) are concatenated into joint vectors for the corresponding subspace learning. The training data are divided into two groups: one for joint subspace learning and the other for the regression of coefficients. The proposed method is trained on both shape and texture information. In this paper, the shape information is expressed by feature points, and the texture information by the luminosity values of the normalized facial image; the luminosity value is used as depth information. In the experiments, the method is trained on pseudo 3-dimensional shape information (x- and y-axes: feature points; z-axis: luminosity values). Through these two contributions, the proposed method achieves accurate multi-view facial shape synthesis.
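The following is a minimal sketch, not the authors' implementation, of the pipeline the abstract describes: concatenate different-view shape pairs into joint vectors, learn a joint subspace, and regress from the input-view coefficients to the joint-subspace coefficients before reconstructing the unseen view. The use of PCA and ridge regression, the landmark count, and all array shapes are illustrative assumptions; the paper's actual basis and regressor may differ.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical training data: N subjects, each shape vector holding the pseudo-3D
# information mentioned in the abstract (x, y feature-point coordinates plus
# luminosity values used as a depth proxy), flattened to length D.
N, D = 200, 3 * 68                    # e.g. 68 facial landmarks (assumed)
frontal = rng.normal(size=(N, D))     # 0-degree shapes (stand-in data)
rotated = rng.normal(size=(N, D))     # e.g. 15-degree shapes of the same subjects

# 1) Joint subspace: concatenate each different-view pair into one joint vector
#    and learn a low-dimensional basis over the pairs.
joint = np.hstack([frontal, rotated])          # shape (N, 2D)
joint_pca = PCA(n_components=30).fit(joint)

# 2) Frontal-only subspace, used to encode a new input image whose other view
#    is unknown at test time.
front_pca = PCA(n_components=30).fit(frontal)

# 3) Regression from frontal coefficients to joint-subspace coefficients
#    (the "regression method based on the coefficients of the input facial image").
reg = Ridge(alpha=1.0).fit(front_pca.transform(frontal),
                           joint_pca.transform(joint))

def synthesize_rotated(frontal_shape):
    """Predict the rotated-view shape vector for one frontal shape vector."""
    c_front = front_pca.transform(frontal_shape[None, :])  # input-view coefficients
    c_joint = reg.predict(c_front)                         # regressed joint coefficients
    joint_vec = joint_pca.inverse_transform(c_joint)[0]    # back to the joint vector
    return joint_vec[D:]                                   # second half = rotated view

new_frontal = rng.normal(size=D)
print(synthesize_rotated(new_frontal).shape)               # -> (D,)
```

In this sketch the two training groups mentioned in the abstract correspond to the data used to fit the PCA bases and the data used to fit the regressor; here they are reused for brevity, but they would be disjoint subsets in the described method.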
