Computational Intelligence and Neuroscience

Fusion of Facial Expressions and EEG for Multimodal Emotion Recognition


Abstract

This paper proposes two multimodal fusion methods that combine brain and peripheral signals for emotion recognition. The input signals are the electroencephalogram (EEG) and facial expression. The stimuli are a subset of movie clips corresponding to four specific regions of the valence-arousal emotional space (happiness, neutral, sadness, and fear). For facial expression detection, four basic emotion states (happiness, neutral, sadness, and fear) are detected by a neural network classifier. For EEG detection, four basic emotion states and three emotion intensity levels (strong, ordinary, and weak) are detected by two support vector machine (SVM) classifiers, respectively. Emotion recognition is performed by two decision-level fusion methods that combine the EEG and facial expression detections using a sum rule or a product rule. Twenty healthy subjects participated in two experiments. The results show that the accuracies of the two multimodal fusion methods are 81.25% and 82.75%, respectively, both higher than that of facial expression detection alone (74.38%) or EEG detection alone (66.88%). Combining facial expressions and EEG information for emotion recognition compensates for the weaknesses of each as a single information source.
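The decision-level fusion step lends itself to a compact illustration. Below is a minimal sketch, not the authors' implementation, of sum-rule and product-rule fusion of the two classifiers' outputs; the class ordering, probability vectors, and function names are illustrative assumptions, and each unimodal classifier is assumed to emit a posterior probability vector over the four emotion states.

# Minimal sketch of decision-level fusion via a sum rule and a product rule.
# Assumptions (illustrative, not from the paper): each unimodal classifier
# outputs a posterior probability vector over the four emotion states.
import numpy as np

EMOTIONS = ["happiness", "neutral", "sadness", "fear"]  # assumed ordering

def fuse_sum(p_face: np.ndarray, p_eeg: np.ndarray) -> str:
    """Sum rule: add the two posterior vectors and take the argmax."""
    return EMOTIONS[int(np.argmax(p_face + p_eeg))]

def fuse_product(p_face: np.ndarray, p_eeg: np.ndarray) -> str:
    """Product rule: multiply the posteriors elementwise and take the argmax."""
    return EMOTIONS[int(np.argmax(p_face * p_eeg))]

# Hypothetical outputs: the facial classifier favors "happiness" while the
# EEG classifier assigns it a near-zero posterior.
p_face = np.array([0.70, 0.10, 0.10, 0.10])
p_eeg  = np.array([0.02, 0.58, 0.20, 0.20])

print(fuse_sum(p_face, p_eeg))      # "happiness" (sums: 0.72 vs 0.68)
print(fuse_product(p_face, p_eeg))  # "neutral"   (products: 0.014 vs 0.058)

The toy numbers highlight the practical difference between the two rules: the sum rule averages out disagreement between modalities, while the product rule lets one modality's confident near-zero posterior effectively veto a class.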
