International Conference on Image and Vision Computing New Zealand

Face Stabilization by Mode Pursuit for Avatar Construction



Abstract

Avatars driven by facial motion capture are widely used in games and movies, and may become the foundation of future online virtual reality social spaces. In many of these applications, it is necessary to disambiguate the rigid motion of the skull from deformations due to changing facial expression. This is required so that the expression can be isolated, analyzed, and transferred to the virtual avatar. The problem of identifying the skull motion is partially addressed through the use of a headset or helmet that is assumed to be rigid relative to the skull. However, the headset can slip when a person is moving vigorously on a motion capture stage or in a virtual reality game. More fundamentally, on some people even the skin on the sides and top of the head moves during extreme facial expressions, resulting in the headset shifting slightly. Accurately conveying facial deformation is important for communicating emotion, so a better solution to this problem is desired. In this paper, we observe that although every point on the face is potentially moving, each tracked point or vertex returns to a neutral or “rest” position frequently as the responsible muscles relax. When viewed from the reference frame of the skull, the histograms of point positions over time should therefore show a concentrated mode at this rest position. On the other hand, the mode is obscured or destroyed when tracked points are viewed in a coordinate frame that is corrupted by the overall rigid motion of the head. Thus, we seek a smooth sequence of rigid transforms that causes the vertex motion histograms to reveal clear modes. To solve this challenging optimization problem, we use a coarse-to-fine strategy in which smoothness is guaranteed by the parameterization of the solution. We validate the results both on professionally created synthetic animations in which the ground truth is known, and on dense 4D computer vision capture of real humans. The results are clearly superior to alternative approaches such as assuming the existence of stationary points on the skin, or using rigid iterated closest points.
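The core observation, that a correct skull-fixed frame makes each vertex's position histogram show a concentrated mode at its rest position, can be illustrated with a small scoring function. This is a minimal sketch of the idea only, not the paper's actual objective or optimizer; the function name, histogram bin count, and the toy drift model are all illustrative assumptions.

```python
import numpy as np

def mode_concentration(points, bins=32):
    """Average modal-bin mass of per-vertex coordinate histograms.

    points: (T, V, 3) tracked vertex positions over T frames, expressed in a
    candidate skull-fixed frame.  If that frame truly follows the skull, each
    vertex spends many frames near its "rest" position, so its histograms show
    a concentrated mode and the score is high.  (Illustrative sketch, not the
    paper's objective.)
    """
    T, V, _ = points.shape
    score = 0.0
    for v in range(V):
        for axis in range(3):
            hist, _ = np.histogram(points[:, v, axis], bins=bins)
            score += hist.max() / T  # fraction of frames in the modal bin
    return score / (V * 3)

# Toy check: vertices hovering near fixed rest positions score higher than the
# same trajectories corrupted by a slow rigid drift of the head (here a simple
# translation along x, standing in for uncompensated skull motion).
rng = np.random.default_rng(0)
T, V = 200, 5
rest = rng.normal(size=(1, V, 3))
stable = rest + rng.normal(scale=0.01, size=(T, V, 3))
drift = np.linspace(0.0, 1.0, T)[:, None] * np.array([1.0, 0.0, 0.0])
drifted = stable + drift[:, None, :]
```

A full method along these lines would search over a smooth, parameterized sequence of per-frame rigid transforms that maximizes such a concentration score, coarse-to-fine, rather than scoring a fixed frame as this sketch does.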
