
Multimodal human emotion/expression recognition



Abstract

Recognizing human facial expressions and emotions by computer is an interesting and challenging problem. Many researchers have investigated emotional content in speech alone, or recognition of human facial expressions solely from images. However, relatively little has been done on combining these two modalities for recognizing human emotions. L.C. De Silva et al. (1997) studied human subjects' ability to recognize emotions by viewing video clips of facial expressions and listening to the corresponding emotional speech stimuli. They found that humans recognize some emotions better from audio information, and other emotions better from video. They also proposed an algorithm to integrate both kinds of input to mimic humans' recognition process. While attempting to implement the algorithm, we encountered difficulties that led us to a different approach. We found these two modalities to be complementary. By using both, we show it is possible to achieve higher recognition rates than with either modality alone.
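The idea of exploiting complementary modalities can be illustrated with a simple decision-level fusion scheme: each modality produces per-class posteriors, and a weighted average combines them before the final decision. This is only a sketch of the general technique, not the paper's actual integration algorithm; the emotion categories, function names, and the fixed weighting are assumptions for illustration.

```python
# Hypothetical decision-level (late) fusion of audio and video emotion
# classifiers. The categories and weighting scheme are illustrative only.
EMOTIONS = ["anger", "happiness", "sadness", "surprise", "fear", "dislike"]

def fuse_late(audio_probs, video_probs, w_audio=0.5):
    """Weighted per-class average of the two modalities' posteriors,
    renormalized so the fused distribution sums to 1."""
    fused = [w_audio * a + (1.0 - w_audio) * v
             for a, v in zip(audio_probs, video_probs)]
    total = sum(fused)
    return [p / total for p in fused]

def predict(audio_probs, video_probs, w_audio=0.5):
    """Return the emotion label with the highest fused posterior."""
    fused = fuse_late(audio_probs, video_probs, w_audio)
    return EMOTIONS[max(range(len(fused)), key=fused.__getitem__)]

# Audio is ambiguous between anger and sadness; video strongly favors anger,
# so the fused decision resolves the ambiguity.
audio = [0.35, 0.05, 0.30, 0.10, 0.10, 0.10]
video = [0.60, 0.10, 0.10, 0.05, 0.05, 0.10]
print(predict(audio, video))  # "anger"
```

In practice the weight would be tuned per emotion, since (as the abstract notes) some emotions are recognized better from audio and others from video.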

