International Workshop on Analysis and Modelling of Faces and Gestures (AMFG 2007), 20 October 2007, Rio de Janeiro, Brazil

Person-Independent Monocular Tracking of Face and Facial Actions with Multilinear Models



Abstract

In tracking the face and facial actions of unknown people, it is essential to take into account two components of facial shape variation: shape variation between people and variation caused by different facial actions such as facial expressions. This paper presents a monocular method of tracking faces and facial actions using a multilinear face model that treats interpersonal and intrapersonal shape variations separately. We created this method by integrating two different frameworks on top of the multilinear face model: particle filter-based tracking for time-dependent facial action and pose estimation, and incremental bundle adjustment for person-dependent shape estimation. This unique combination, together with the multilinear face model, is the key to tracking the faces and facial actions of arbitrary people in real time with no pre-learned individual face models. Experiments using real video sequences demonstrate the effectiveness of our method.
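The abstract only sketches how the multilinear model separates interpersonal and intrapersonal variation. As a rough illustration of that idea, the Python/NumPy sketch below contracts a core shape tensor with separate identity (interpersonal) and expression (intrapersonal) coefficient vectors; all names and dimensions (core, w_id, w_exp, n_vertices, and the random data) are illustrative assumptions and do not reflect the paper's actual implementation.

```python
import numpy as np

# Illustrative sketch of a multilinear (here bilinear) face shape model.
# The core tensor would normally be learned offline, e.g. by a higher-order
# SVD of a vertices x identities x expressions data tensor; here it is random.

n_vertices = 100    # number of 3D feature points in the face model (assumed)
n_id_modes = 10     # identity (interpersonal) basis dimension (assumed)
n_exp_modes = 5     # facial-action / expression (intrapersonal) dimension (assumed)

rng = np.random.default_rng(0)
core = rng.standard_normal((3 * n_vertices, n_id_modes, n_exp_modes))

def synthesize_shape(core, w_id, w_exp):
    """Contract the core tensor with identity and expression coefficients.

    s = core x_2 w_id x_3 w_exp, giving a 3*n_vertices shape vector.
    """
    s = np.tensordot(core, w_id, axes=([1], [0]))  # contract identity mode -> (3N, n_exp_modes)
    s = s @ w_exp                                  # contract expression mode -> (3N,)
    return s.reshape(-1, 3)                        # N x 3 feature points

# In a person-independent tracker, w_exp (and head pose) would be estimated
# per frame, e.g. with a particle filter, while w_id is refined slowly over
# many frames, e.g. with incremental bundle adjustment.
w_id = rng.standard_normal(n_id_modes)
w_exp = rng.standard_normal(n_exp_modes)
points = synthesize_shape(core, w_id, w_exp)
print(points.shape)  # (100, 3)
```

Keeping the identity and expression coefficients in separate modes is what lets the per-frame estimation touch only the expression/pose parameters while the identity parameters stay shared across the whole sequence.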


