9th International Conference on Language Resources and Evaluation

Comparison of Gender- and Speaker-adaptive Emotion Recognition



Abstract

Deriving the emotion of a human speaker is a hard task, especially if only the audio stream is taken into account. While state-of-the-art approaches already provide good results, adaptive methods have been proposed in order to further improve the recognition accuracy. A recent approach is to add characteristics of the speaker, e.g., the gender of the speaker. In this contribution, we argue that adding information unique to each speaker, obtained via speaker identification techniques, to the feature vector of the statistical classification algorithm improves emotion recognition. Moreover, we compare this approach to emotion recognition that adds only the speaker's gender, a non-unique speaker attribute. We justify this by performing adaptive emotion recognition using both gender and speaker information on four corpora of different languages containing acted and non-acted speech. The final results show that adding speaker information significantly outperforms both adding gender information and solely using a generic speaker-independent approach.
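The feature-level adaptation described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic acoustic features, the number of speakers, the one-hot encoding of speaker identity, and the SVM classifier are all placeholder assumptions standing in for the paper's actual feature extraction and statistical classifier.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-utterance acoustic features:
# 200 utterances from 4 speakers, 3 emotion classes (all assumed values).
n_utt, n_feat, n_spk = 200, 13, 4
X_acoustic = rng.normal(size=(n_utt, n_feat))
speaker_id = rng.integers(0, n_spk, size=n_utt)
gender = speaker_id % 2                     # assumed: two male, two female speakers
y_emotion = rng.integers(0, 3, size=n_utt)

# Gender-adaptive variant: append a single binary gender feature.
X_gender = np.hstack([X_acoustic, gender[:, None].astype(float)])

# Speaker-adaptive variant: append a one-hot speaker-identity vector,
# as would be produced by a speaker-identification front end.
X_speaker = np.hstack([X_acoustic, np.eye(n_spk)[speaker_id]])

# Any statistical classifier can consume the augmented vectors; an SVM here.
clf = SVC().fit(X_speaker, y_emotion)
pred = clf.predict(X_speaker)
```

The point of the sketch is that both adaptation schemes differ only in what is concatenated to the acoustic feature vector: one binary gender flag versus a speaker-identity encoding whose dimensionality grows with the number of known speakers.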
