Venue: International Conference on Automatic Face and Gesture Recognition

Multimodal Deep Feature Aggregation for Facial Action Unit Recognition using Visible Images and Physiological Signals



Abstract

In this paper, we present a feature aggregation method that combines information from the visible-light domain with physiological signals to predict the 12 facial action units in the MMSE dataset. Although multimodal affect analysis has gained a lot of attention, the utility of physiological signals in recognizing facial action units is relatively unexplored. We investigate whether physiological signals such as Electrodermal Activity (EDA), respiration rate, and pulse rate can be used as metadata for action unit recognition. We exploit the effectiveness of deep learning methods to learn an optimal combined representation derived from the individual modalities. We obtained improved performance on the MMSE dataset, further validating our claim. To the best of our knowledge, this is the first study of facial action unit recognition using physiological signals.
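The abstract describes fusing visual features with physiological measurements into a combined representation for 12-way AU prediction, but does not publish the architecture. A minimal NumPy sketch of one plausible fusion scheme follows; all dimensions, the concatenation-based fusion, and the randomly initialized weights (standing in for learned parameters) are assumptions for illustration, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical dimensions: 512-d visual CNN features, 3 physiological
# measurements (EDA, respiration rate, pulse rate), 12 action units.
VIS_DIM, PHYS_DIM, HIDDEN, NUM_AUS = 512, 3, 64, 12

# Randomly initialized weights stand in for parameters that would be
# learned end-to-end in the paper's deep aggregation network.
W_fuse = rng.normal(0.0, 0.01, (VIS_DIM + PHYS_DIM, HIDDEN))
W_out = rng.normal(0.0, 0.01, (HIDDEN, NUM_AUS))

def predict_aus(visual_feat, phys_feat):
    """Concatenate visual features with physiological signals and map
    the fused vector to 12 independent AU occurrence probabilities."""
    fused = np.concatenate([visual_feat, phys_feat])  # (515,)
    hidden = np.maximum(0.0, fused @ W_fuse)          # ReLU hidden layer
    return sigmoid(hidden @ W_out)                    # (12,) in (0, 1)

probs = predict_aus(rng.normal(size=VIS_DIM),
                    np.array([0.4, 16.0, 72.0]))      # EDA, resp., pulse
```

Because AUs can co-occur, per-unit sigmoids (multi-label) rather than a softmax are the natural output choice here.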
