IEEE International Conference on Acoustics, Speech and Signal Processing

Bhattacharyya distance based emotional dissimilarity measure for emotion classification



Abstract

Speech is one of the most important signals that can be used to detect human emotions. When speech is modulated by different emotions, its spectral distribution changes accordingly. A Gaussian Mixture Model (GMM) can model these changes in spectral distribution effectively. A GMM-supervector characterizes the spectral distribution of an emotional utterance through the GMM parameters, such as the mean vectors and covariance matrices. In this paper, we propose to use GMM-supervectors to characterize an emotional spectral dissimilarity measure for emotion classification. We employ the GMM-SVM kernel with a Bhattacharyya-based GMM distance to obtain the dissimilarity measure. Besides the first-order statistics of the means, we consider a dissimilarity measure using the second-order statistics of the covariances, which describe the shape of the distribution. Experiments are conducted with an SVM classifier to classify the emotions anger, happiness, neutral, and sadness. We achieve an average accuracy of 78.14% for speaker-independent emotion classification.
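The Bhattacharyya distance mentioned in the abstract has a closed form between two Gaussian components: a mean-dependent term (the first-order statistics) plus a covariance-dependent term (the second-order statistics that describe the shape of the distribution). As an illustrative sketch only, since the abstract does not give the exact kernel construction, the Python snippet below computes this distance for diagonal-covariance components and averages it over the aligned components of two GMMs (e.g. both adapted from a common background model); the function names and the weighting scheme are assumptions, not the paper's formulation.

import numpy as np

def bhattacharyya_gaussian(mu1, var1, mu2, var2):
    # Bhattacharyya distance between two diagonal-covariance Gaussians.
    # mean_term uses the means (first-order statistics);
    # cov_term uses only the variances (second-order statistics).
    var_avg = 0.5 * (var1 + var2)
    mean_term = 0.125 * np.sum((mu1 - mu2) ** 2 / var_avg)
    cov_term = 0.5 * np.sum(np.log(var_avg) - 0.5 * (np.log(var1) + np.log(var2)))
    return mean_term + cov_term

def gmm_dissimilarity(weights, means_a, vars_a, means_b, vars_b):
    # Weighted average of per-component distances between two GMMs whose
    # components are aligned, e.g. both MAP-adapted from a common UBM.
    return sum(
        w * bhattacharyya_gaussian(ma, va, mb, vb)
        for w, ma, va, mb, vb in zip(weights, means_a, vars_a, means_b, vars_b)
    )

A kernel value such as exp(-d) computed from this dissimilarity could then be supplied to an SVM with a precomputed kernel matrix; the specific kernel used in the paper may differ.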
