Journal of Applied Research and Technology

A low cost framework for real-time marker based 3-D human expression modeling



Abstract

This work presents a robust, low-cost framework for real-time, marker-based 3-D human expression modeling using off-the-shelf stereo web-cameras and inexpensive adhesive markers applied to the face. The system has low computational requirements, runs on standard hardware, and is portable, with minimal set-up time and no training. It does not require a controlled lab environment (lighting or set-up) and is robust under varying conditions, e.g., illumination, facial hair, or skin tone. The stereo web-cameras perform 3-D marker tracking to recover both rigid head motion and the non-rigid motion of expressions. Tracked markers are then mapped onto a 3-D face model driven by a virtual muscle animation system. Muscle inverse kinematics updates the muscle contraction parameters from marker motion in order to reproduce the performer's expression on a virtual character. The parametrization of the muscle-based animation encodes a facial performance with very little bandwidth. Additionally, a radial basis function mapping remaps the motion capture data to any face model, enabling the automated creation of a personalized 3-D face model and animation system from 3-D data. The expressive power of the system and its ability to recognize new expressions were evaluated on a group of test subjects with respect to the six universally recognized facial expressions. Results show that the abstract muscle definition reduces the effect of noise in the motion capture data and allows the seamless animation of any anthropomorphic virtual face model with data acquired from a human face performance.
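The radial basis function remapping mentioned in the abstract can be read as scattered-data interpolation: fit weights so that landmark positions on the captured face map exactly onto the corresponding landmarks of the target face model, then push every tracked marker through the same interpolant. A minimal Gaussian-kernel sketch in pure Python (the kernel width `eps` and all names are assumptions, not the paper's):

```python
import math

def _dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def _solve(A, B):
    """Solve A X = B by Gauss-Jordan elimination (A: n x n, B: n x m)."""
    n = len(A)
    M = [list(A[i]) + list(B[i]) for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))  # partial pivoting
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                k = M[r][c] / M[c][c]
                M[r] = [a - k * v for a, v in zip(M[r], M[c])]
    return [[M[r][j] / M[r][r] for j in range(n, len(M[r]))] for r in range(n)]

def rbf_fit(src, dst, eps=1.0):
    """Weights of a Gaussian RBF interpolant taking src landmarks onto dst."""
    phi = [[math.exp(-(eps * _dist(a, b)) ** 2) for b in src] for a in src]
    return _solve(phi, dst)

def rbf_apply(weights, src, pt, eps=1.0):
    """Map one 3-D point through the fitted interpolant."""
    return [sum(math.exp(-(eps * _dist(pt, c)) ** 2) * w[k]
                for c, w in zip(src, weights))
            for k in range(len(weights[0]))]
```

By construction the interpolant reproduces the target landmarks exactly at the source landmarks, while points in between are deformed smoothly, which is what makes the same capture session reusable on any anthropomorphic face model.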
