
Fusion of electroencephalographic dynamics and musical contents for estimating emotional responses in music listening



Abstract

Electroencephalography (EEG)-based emotion classification during music listening has gained increasing attention due to its promise for potential applications such as musical affective brain-computer interfaces (ABCI), neuromarketing, music therapy, and implicit multimedia tagging and triggering. However, music is an ecologically valid and complex stimulus that conveys emotions to listeners through the composition of musical elements, and distinguishing emotions from EEG signals alone remains challenging. This study assessed the applicability of a multimodal approach that leverages EEG dynamics and the acoustic characteristics of musical contents to classify emotional valence and arousal. To this end, the study adopted machine-learning methods to systematically elucidate the roles of the EEG and music modalities in emotion modeling. The empirical results suggested that when whole-head EEG signals were available, the inclusion of musical contents did not improve classification performance: the accuracy of 74–76% obtained using the EEG modality alone was statistically comparable to that of the multimodal approach. However, when EEG dynamics were available from only a small set of electrodes (likely the case in real-life applications), the music modality played a complementary role, augmenting the EEG results from around 61% to 67% in valence classification and from around 58% to 67% in arousal classification. Musical timbre appeared to replace less-discriminative EEG features and led to improvements in both valence and arousal classification, whereas musical loudness contributed specifically to arousal classification. The present study not only provides principles for constructing an EEG-based multimodal approach, but also reveals fundamental insights into the interplay of brain activity and musical contents in emotion modeling.
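As a concrete illustration of the feature-level fusion the abstract describes, the following Python sketch concatenates per-trial EEG features with acoustic music descriptors (e.g., timbre and loudness) and compares a classifier trained on EEG features alone against the fused representation. The feature dimensions, the synthetic data, and the linear SVM are illustrative assumptions only; the paper does not prescribe this exact pipeline.

```python
# Minimal sketch of feature-level fusion for valence/arousal
# classification, assuming precomputed per-trial feature vectors.
# Shapes, data, and classifier choice are hypothetical.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials = 200

# Hypothetical EEG spectral features (e.g., band power per electrode)
# and musical features (e.g., timbre and loudness descriptors).
eeg_features = rng.normal(size=(n_trials, 62 * 5))   # 62 channels x 5 bands
music_features = rng.normal(size=(n_trials, 12))     # acoustic descriptors
labels = rng.integers(0, 2, size=n_trials)           # high/low valence

# Feature-level fusion: concatenate the two modalities per trial.
fused = np.hstack([eeg_features, music_features])

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
print("EEG only:", cross_val_score(clf, eeg_features, labels, cv=5).mean())
print("Fusion  :", cross_val_score(clf, fused, labels, cv=5).mean())
```

On real data, the fusion gain would be expected mainly when the EEG feature set is impoverished (few electrodes), mirroring the complementary role of the music modality reported in the study.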
