Sensors (Basel, Switzerland)

EEG-Based Emotion Recognition by Convolutional Neural Network with Multi-Scale Kernels



Abstract

Besides facial- or gesture-based emotion recognition, electroencephalogram (EEG) data have drawn attention thanks to their ability to counter deceptive external human expressions, such as facial expressions or speech. Emotion recognition based on EEG signals relies heavily on the features and their delineation, which requires selecting both the feature categories converted from the raw signals and the types of representation that can display the intrinsic properties of an individual signal or a group of signals. Moreover, the correlations and interactions among channels and frequency bands also carry crucial information for emotional-state prediction, yet they are commonly disregarded in conventional approaches. Therefore, in our method, the correlations between the 32 channels and the frequency bands were exploited to enhance emotion-prediction performance. The features extracted from the time domain were arranged into feature-homogeneous matrices, with their positions following the corresponding electrodes placed on the scalp. Given this 3D representation of the EEG signals, the model must be able to learn both the local and global patterns that describe the short- and long-range relations among EEG channels, along with the embedded features. To this end, we proposed a 2D CNN whose convolutional layers with different kernel sizes are assembled into a convolution block, combining features distributed over both small and large regions. Ten-fold cross-validation was conducted on the DEAP dataset to demonstrate the effectiveness of our approach. We achieved average accuracies of 98.27% and 98.36% for arousal and valence binary classification, respectively.
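The core architectural idea of the abstract can be illustrated with a minimal NumPy sketch: time-domain features are laid out on a 2D grid following scalp electrode positions, and parallel convolutions with different kernel sizes are applied to the same grid and stacked, so that small kernels capture local (short-range) channel relations while larger kernels capture global (long-range) ones. The grid size (9x9), kernel sizes (3/5/7), and random weights below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def conv2d_same(x, kernel):
    """Naive single-channel 2D convolution with 'same' zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    h, w = x.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * kernel)
    return out

def multi_scale_block(x, kernel_sizes=(3, 5, 7), seed=0):
    """Apply parallel convolutions with different kernel sizes to the
    same input grid and stack the resulting feature maps channel-wise,
    mimicking a multi-scale convolution block."""
    rng = np.random.default_rng(seed)
    maps = [conv2d_same(x, rng.standard_normal((k, k)))
            for k in kernel_sizes]
    return np.stack(maps)  # shape: (num_scales, H, W)

# Hypothetical 9x9 "electrode grid" holding one time-domain feature
# value per scalp position (positions follow electrode placement).
grid = np.random.default_rng(1).standard_normal((9, 9))
features = multi_scale_block(grid)
print(features.shape)  # (3, 9, 9)
```

In the actual model, such blocks would carry learned weights and be followed by further convolutional and dense layers; the sketch only shows how differently sized kernels over one spatial grid yield multi-scale feature maps.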
