IEEE International Conference on Acoustics, Speech and Signal Processing

INTEGRATING PERCEIVERS NEURAL-PERCEPTUAL RESPONSES USING A DEEP VOTING FUSION NETWORK FOR AUTOMATIC VOCAL EMOTION DECODING


Abstract

Understanding the neuro-perceptual mechanisms of vocal emotion perception continues to be an important research direction, not only for advancing scientific knowledge but also for inspiring more robust affective computing technologies. The large variability in the manifested fMRI signals among subjects has been shown to be due to the effect of individual differences, i.e., inter-subject variability. However, relatively few works have developed modeling techniques for the task of automatic neuro-perceptual decoding that handle such idiosyncrasies. In our work, we propose a novel computational method, a deep voting fusion neural network architecture, that learns an adjusted weight matrix applied at the fusion layer. The framework achieves an unweighted average recall of 53.10% in a four-class vocal emotion state decoding task, i.e., a relative improvement of 8.9% over a two-stage SVM decision-level fusion. Our framework demonstrates its effectiveness in handling individual differences. Further analysis is conducted to study the properties of the learned adjusted weight matrix as a function of emotion classification accuracy.
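The decision-level fusion with an adjusted weight matrix can be sketched as follows. This is a minimal illustrative sketch only: the function name, array shapes, and element-wise weighting scheme are assumptions for exposition, not the paper's exact network, and the learned matrix here is replaced by a fixed placeholder.

```python
import numpy as np

def voting_fusion(perceiver_probs, W):
    """Fuse per-perceiver class posteriors into one group-level decision.

    perceiver_probs: (n_perceivers, n_classes) softmax outputs, one row
                     per perceiver's decoder.
    W:               (n_perceivers, n_classes) adjusted weight matrix
                     applied at the fusion layer; in the paper this matrix
                     is learned, here it is supplied as a fixed input.
    """
    weighted = perceiver_probs * W   # element-wise re-weighting of each vote
    fused = weighted.sum(axis=0)     # pool the weighted votes across perceivers
    return fused / fused.sum()       # renormalize into a class distribution

# Three hypothetical perceivers voting over four emotion classes.
probs = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.2, 0.5, 0.2, 0.1],
                  [0.6, 0.2, 0.1, 0.1]])
W = np.ones_like(probs)              # uniform weights reduce to plain voting
fused = voting_fusion(probs, W)
print(fused.argmax())                # class 0 carries the pooled vote
```

With uniform weights this reduces to averaging the posteriors; a learned, non-uniform `W` can down-weight idiosyncratic perceivers per class, which is how the adjusted weight matrix addresses inter-subject variability.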
