IFIP Conference on Artificial Intelligence Applications and Innovations > Multimodal emotion recognition from expressive faces, body gestures and speech

Multimodal emotion recognition from expressive faces, body gestures and speech



Abstract

In this paper we present a multimodal approach for the recognition of eight emotions that integrates information from facial expressions, body movement and gestures, and speech. We trained and tested a model with a Bayesian classifier, using a multimodal corpus with eight emotions and ten subjects. First, individual classifiers were trained for each modality. Then the data were fused at the feature level and at the decision level. Fusing multimodal data markedly increased the recognition rates in comparison with the unimodal systems: the multimodal approach improved on the most successful unimodal system by more than 10%. Furthermore, fusion performed at the feature level yielded better results than fusion performed at the decision level.
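The two fusion strategies compared in the abstract can be sketched in a few lines. The following is an illustrative sketch on synthetic data, not the paper's actual corpus, features, or classifier: a Gaussian naive Bayes model stands in for the Bayesian classifier, and the feature dimensions per modality are made-up assumptions.

```python
# Illustrative sketch: feature-level vs decision-level fusion of three
# modalities (face, body, speech) for 8-class emotion recognition.
# Synthetic data and Gaussian naive Bayes are assumptions for illustration;
# they are not the paper's corpus or model.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n_classes, n_per_class = 8, 30  # eight emotions, as in the paper

def make_modality(dim, sep):
    """Synthetic per-modality features: one Gaussian cluster per emotion."""
    return np.vstack([
        rng.normal(loc=c * sep, scale=1.0, size=(n_per_class, dim))
        for c in range(n_classes)
    ])

y = np.repeat(np.arange(n_classes), n_per_class)
face = make_modality(10, 0.8)    # hypothetical facial-expression features
body = make_modality(6, 0.6)     # hypothetical body/gesture features
speech = make_modality(4, 0.5)   # hypothetical speech features

# Feature-level fusion: concatenate all modality features into one
# vector per sample and train a single classifier on it.
X_feat = np.hstack([face, body, speech])
pred_feat = GaussianNB().fit(X_feat, y).predict(X_feat)

# Decision-level fusion: train one classifier per modality, then
# combine their class posteriors (here by simple averaging).
clfs = [GaussianNB().fit(X, y) for X in (face, body, speech)]
posteriors = np.mean(
    [clf.predict_proba(X) for clf, X in zip(clfs, (face, body, speech))],
    axis=0,
)
pred_dec = posteriors.argmax(axis=1)
```

In feature-level fusion a single model can exploit cross-modal correlations, whereas decision-level fusion only combines per-modality outputs, which is one common explanation for the feature-level advantage the paper reports.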

