IEEE International Conference on Systems, Man, and Cybernetics

Micro-expression Video Clip Synthesis Method based on Spatial-temporal Statistical Model and Motion Intensity Evaluation Function



Abstract

Micro-expression (ME) recognition is an effective means of detecting lies and other subtle human emotions. Machine learning-based and deep learning-based models have achieved remarkable results recently. However, these models are prone to overfitting because annotated ME video clips are scarce: such videos are much harder to collect and annotate than ordinary expression clips, which limits further improvement in recognition performance. To address this issue, we propose a micro-expression video clip synthesis method based on a spatial-temporal statistical model and a motion intensity evaluation function. In the proposed scheme, we establish a micro-expression spatial-temporal statistical model (MSTSM) by analyzing the dynamic characteristics of micro-expressions, and we deploy this model to provide the rules for micro-expression video synthesis. In addition, we design a motion intensity evaluation function (MIEF) to ensure that the intensity of facial motion in the synthesized video clips is consistent with that of real MEs. Finally, facial video clips containing MEs of new subjects can be generated by combining the MIEF with the widely used 3D facial morphable model and the rules provided by the MSTSM. Experimental results demonstrate that the accuracy of micro-expression recognition can be effectively improved by augmenting training data with the video clips synthesized by the proposed method.
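The abstract does not specify how the MIEF is computed; as a rough illustration only, one common proxy for facial motion intensity is the mean inter-frame difference of a clip, which can then be compared against statistics measured on real ME clips. The sketch below (NumPy, with a hypothetical `mief_score` and made-up reference statistics, not the paper's actual function) shows the general idea of scoring a synthesized clip's motion intensity against a real-ME target:

```python
import numpy as np

def motion_intensity(frames):
    """Mean absolute inter-frame difference, a simple motion proxy."""
    frames = np.asarray(frames, dtype=np.float64)
    diffs = np.abs(np.diff(frames, axis=0))   # frame-to-frame changes
    return diffs.mean(axis=(1, 2))            # one intensity per transition

def mief_score(frames, real_mean, real_std):
    """Normalized deviation of the clip's mean motion intensity from
    real-ME statistics; lower means more consistent with real MEs.
    (Hypothetical stand-in for the paper's MIEF.)"""
    intensity = motion_intensity(frames).mean()
    return abs(intensity - real_mean) / real_std

# Toy synthesized clip: 8 frames of a 24x24 "face" whose brightness
# drifts by a constant 0.01 per frame, i.e. uniform small motion.
rng = np.random.default_rng(0)
base = rng.random((24, 24))
clip = [base + 0.01 * t * np.ones((24, 24)) for t in range(8)]

# With made-up real-ME stats (mean=0.01, std=0.005) the clip matches exactly.
print(round(mief_score(clip, real_mean=0.01, real_std=0.005), 3))  # → 0.0
```

A synthesis pipeline could reject or rescale candidate clips whose score exceeds a threshold, which is the kind of consistency check the MIEF is described as providing.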
