
Talking avatar for web-based interfaces



Abstract

In this paper we present an approach for creating interactive, speaking avatar models from standard face images. We start from a 3D human face model that can be adjusted to a particular face. To adjust the 3D model from a 2D image, a new two-step method is presented. First, a process based on Procrustes analysis is applied to find the best match for the input key points, obtaining the rotation, translation and scale needed to best fit the model to the photo. Then, using the resulting model, we refine the face mesh by applying a linear transform to each vertex. For visual speech animation, we consider a total of 15 different mouth positions (visemes) to accurately model the articulation of the Portuguese language. For normalization purposes, each viseme is defined relative to the generic neutral face. The animation is rendered by linear time interpolation, given a sequence of visemes and their instants of occurrence.
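As a rough illustration of the two steps described above, the Python sketch below (not from the paper; the function names `procrustes_fit` and `blend_visemes` are hypothetical) shows how a similarity transform can be recovered from corresponding key points via Procrustes analysis, and how two viseme meshes can be blended by linear time interpolation. It assumes both point sets already live in the same dimensionality, e.g. image landmarks matched against projected model key points.

```python
import numpy as np

def procrustes_fit(model_pts, image_pts):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    mapping model key points onto image key points; both arrays have shape (N, d)."""
    mu_m = model_pts.mean(axis=0)
    mu_i = image_pts.mean(axis=0)
    A = model_pts - mu_m                                # centred model points
    B = image_pts - mu_i                                # centred image points
    U, S, Vt = np.linalg.svd(B.T @ A)                   # SVD of the cross-covariance
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0] * (A.shape[1] - 1) + [d])         # guard against reflections
    R = U @ D @ Vt                                      # optimal rotation
    s = np.trace(np.diag(S) @ D) / np.trace(A.T @ A)    # optimal isotropic scale
    t = mu_i - s * R @ mu_m                             # optimal translation
    return s, R, t

def blend_visemes(mesh_a, mesh_b, t_a, t_b, t):
    """Linear time interpolation between two viseme vertex arrays of the same
    shape, for a playback time t with t_a <= t <= t_b."""
    w = (t - t_a) / (t_b - t_a)
    return (1.0 - w) * mesh_a + w * mesh_b
```

The per-vertex linear refinement of the face mesh that the authors describe would follow as a separate step after the global Procrustes alignment; it is not shown here.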

