Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence

Active and dynamic information fusion for facial expression understanding from image sequences



Abstract

This paper explores the use of multisensory information fusion techniques with dynamic Bayesian networks (DBN) for modeling and understanding the temporal behavior of facial expressions in image sequences. Our facial feature detection and tracking, based on active IR illumination, provides reliable visual information under variable lighting and head motion. Our approach to facial expression recognition rests on a dynamic and probabilistic framework that combines DBNs with Ekman's facial action coding system (FACS) to systematically model the dynamic and stochastic behavior of spontaneous facial expressions. The framework not only provides a coherent and unified hierarchical probabilistic representation of the spatial and temporal information related to facial expressions, but also allows us to actively select the most informative visual cues from the available information sources to minimize ambiguity in recognition. Facial expressions are recognized by fusing not only the current visual observations but also previous visual evidence. Consequently, recognition becomes more robust and accurate through explicitly modeling the temporal behavior of facial expressions. In this paper, we present the theoretical foundation underlying the proposed probabilistic and dynamic framework for facial expression modeling and understanding. Experimental results demonstrate that our approach can accurately and robustly recognize spontaneous facial expressions from an image sequence under different conditions.
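The two core ideas in the abstract, temporal fusion of visual evidence and active selection of the most informative cue, can be illustrated with a minimal sketch. This is not the paper's model: the three expression states, the two cue detectors, and all probability tables below are made-up illustrative numbers. The sketch runs a standard Bayesian filter (predict with a transition model, then fuse one cue's likelihood) and picks the cue whose observation is expected to reduce posterior entropy the most, one simple instantiation of "actively selecting the most informative visual cue."

```python
import numpy as np

# Hypothetical 3-state expression model (states and numbers are
# illustrative, not from the paper).
STATES = ["happy", "surprise", "neutral"]

# Transition model P(x_t | x_{t-1}): expressions tend to persist.
T = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])

# Per-cue observation likelihoods P(z | x) for two hypothetical facial
# cues (e.g., a mouth detector and a brow detector). Rows index the
# hidden state, columns index the detector's discrete output.
CUES = {
    "mouth": np.array([[0.7, 0.2, 0.1],
                       [0.2, 0.6, 0.2],
                       [0.1, 0.2, 0.7]]),
    "brow":  np.array([[0.5, 0.3, 0.2],
                       [0.1, 0.8, 0.1],
                       [0.3, 0.2, 0.5]]),
}

def predict(belief):
    """Temporal prediction: propagate belief through the transition model."""
    return belief @ T

def update(belief, cue, obs):
    """Fuse one cue's observation into the belief (Bayes rule)."""
    post = belief * CUES[cue][:, obs]
    return post / post.sum()

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def most_informative_cue(belief):
    """Choose the cue with the lowest expected posterior entropy."""
    best, best_h = None, np.inf
    for cue, L in CUES.items():
        p_obs = belief @ L  # predictive distribution over this cue's outputs
        h = sum(p_obs[o] * entropy(belief * L[:, o] / p_obs[o])
                for o in range(L.shape[1]))
        if h < best_h:
            best, best_h = cue, h
    return best

# Filter a short sequence: predict, actively pick a cue, fuse its output.
belief = np.ones(3) / 3
for obs in [1, 1, 1]:  # the chosen detector repeatedly reports output 1
    belief = predict(belief)
    cue = most_informative_cue(belief)
    belief = update(belief, cue, obs)

print(STATES[int(np.argmax(belief))])
```

Because the belief is carried forward through `predict` before each `update`, the recognition at frame t reflects all earlier evidence, not just the current observation, which is the temporal-fusion property the abstract emphasizes.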


