
Method and system for capturing and representing 3D geometry, color and shading of facial expressions and other animated objects


Abstract

The method captures a 3D model of a face, which includes a 3D mesh and a series of deformations of the mesh that define changes in position of the mesh over time (e.g., for each frame). The method also builds a texture map associated with each frame in an animation sequence. The method achieves significant advantages by using markers on an actor's face to track motion of the face over time and to establish a relationship between the 3D model and texture. Specifically, videos of an actor's face with markers are captured from multiple cameras. Stereo matching is used to derive 3D locations of the markers in each frame. A 3D scan is also performed on the actor's face with the markers to produce an initial mesh with markers. The markers from the 3D scan are matched with the 3D locations of the markers in each frame from the stereo matching process. The method determines how the position of the mesh changes from frame to frame by matching the 3D locations of the markers from one frame to the next. The method derives textures for each frame by removing the marker dots from the video data, finding a mapping between texture space and the 3D space of the mesh, and combining the camera views for each frame into a single texture map. The data needed to represent facial animation includes: 1) an initial 3D mesh, 2) 3D deformations of the mesh per frame, and 3) a texture map associated with each deformation. The method compresses 3D geometry by decomposing the deformation data into basis vectors and coefficients. The method compresses the textures using video compression.
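The stereo-matching step above recovers each marker's 3D location from its 2D positions in two (or more) camera views. A minimal sketch of that idea, using standard linear (DLT) triangulation with toy pinhole cameras (the projection matrices and marker coordinates here are illustrative, not from the patent):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one marker seen in two cameras.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: 2D image coordinates of the same marker in each view.
    Returns the estimated 3D point in world coordinates.
    """
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The 3D point is the null vector of A (smallest singular vector).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: one at the origin, one shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known marker into both views, then recover it.
X_true = np.array([0.2, -0.1, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)
```

With noiseless projections the recovered point matches the original exactly; with real video data, the same least-squares formulation absorbs pixel noise across views.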
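The geometry-compression step, decomposing the per-frame deformation data into basis vectors and coefficients, can be sketched as a principal-component (SVD) factorization. The data shapes and values below are synthetic stand-ins; the patent does not specify these particular dimensions:

```python
import numpy as np

# Toy deformation data: each row is one frame's stacked vertex
# displacements (n_frames x 3*n_vertices), synthesized to have low rank.
rng = np.random.default_rng(0)
n_frames, n_coords, k = 120, 300, 8
D = rng.normal(size=(n_frames, k)) @ rng.normal(size=(k, n_coords))

# Decompose into k basis vectors plus per-frame coefficients via SVD.
mean = D.mean(axis=0)
_, _, Vt = np.linalg.svd(D - mean, full_matrices=False)
basis = Vt[:k]                    # k basis vectors (k x n_coords)
coeffs = (D - mean) @ basis.T     # per-frame coefficients (n_frames x k)

# Transmit/store only `basis`, `coeffs`, and `mean` instead of D,
# then reconstruct each frame's deformation on playback.
D_rec = coeffs @ basis + mean
```

In this toy case the stored values (basis + coefficients + mean) come to roughly a tenth of the raw deformation matrix, and the reconstruction is exact because the data was built with rank k; real capture data would instead be approximated by keeping enough basis vectors to meet an error budget.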
