IEEE International Colloquium on Signal Processing & Its Applications

Subject-Independent Emotion Recognition During Music Listening Based on EEG Using Deep Convolutional Neural Networks


Abstract

Emotion recognition during music listening using electroencephalogram (EEG) signals has recently gained increasing attention from researchers. Many studies have focused on within-subject accuracy, while subject-independent performance remains unclear. In this paper, the objective is to build an emotion recognition model that generalizes across multiple subjects. By adopting convolutional neural networks (CNNs), we can exploit information across both electrodes and time steps. CNNs also remove the need for hand-crafted feature extraction, which might discard relevant but unobserved features. CNNs with three to seven convolutional layers were deployed in this research. We measured their performance on binary classification tasks over the arousal and valence dimensions of emotion. The results show that our method captures EEG signal patterns across numerous subjects, achieving 81.54% and 86.87% accuracy for arousal and valence, respectively, under 10-fold cross-validation. The method also shows a higher capability of generalizing to unseen subjects than the previous method, as observed in the results of leave-one-subject-out validation.
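The subject-independent evaluation described above hinges on leave-one-subject-out validation: all trials from one subject are held out as the test set while the model trains on the remaining subjects, repeated once per subject. A minimal sketch of that splitting scheme, assuming trials are indexed by a hypothetical per-trial subject-ID array (the paper does not specify its data layout):

```python
import numpy as np

def leave_one_subject_out(subject_ids):
    """Yield (train_idx, test_idx) pairs, holding out one subject at a time.

    subject_ids: sequence giving the subject label of each trial.
    """
    subject_ids = np.asarray(subject_ids)
    for subject in np.unique(subject_ids):
        test_mask = subject_ids == subject
        # Training fold never contains trials from the held-out subject,
        # so accuracy reflects generalization to an unseen person.
        yield np.where(~test_mask)[0], np.where(test_mask)[0]

# Hypothetical example: 6 EEG trials recorded from 3 subjects
subjects = [1, 1, 2, 2, 3, 3]
splits = list(leave_one_subject_out(subjects))
```

In contrast, plain 10-fold cross-validation shuffles trials from all subjects into every fold, which is why it typically reports higher accuracy than the leave-one-subject-out figures.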
机译:最近,使用脑电图(EEG)进行音乐收听时的情绪识别已引起研究人员的更多关注。许多研究集中于一个主题的准确性,而与主题无关的绩效评估仍不清楚。在本文中,目标是创建可应用于多个主题的情绪识别模型。通过采用卷积神经网络(CNN),可以利用电极和时间步长信息来获得优势。使用CNN并不需要特征提取,这可能会遗漏其他相关但未发现的特征。这项研究部署了具有三到七个卷积层的CNN。我们用二元分类任务测量了他们的表现,以评估情绪的组成,包括唤醒和化合价。结果表明,我们的方法通过1​​0倍交叉验证捕获了来自众多受试者的EEG信号模式,分别从唤醒和化合价中获得了81.54%和86.87%的准确度。从遗忘一测验验证的结果可以看出,该方法还显示出比以前的方法具有更高的泛化能力,可以将看不见的对象进行泛化。
