Published in: International Conference on Information Systems and Development: Methods and Tools, Theory and Practice

Sound Processing Features for Speaker-Dependent and Phrase-Independent Emotion Recognition in Berlin Database



Abstract

An emotion recognition framework based on sound processing could improve services in human–computer interaction. Various quantitative speech features obtained from sound processing of acted speech were tested to determine whether they suffice to discriminate between seven emotions. Multilayer perceptrons were trained to classify gender and emotion from a 24-input vector, which describes the speaker's prosody over the entire sentence using statistics of sound features. Several experiments were performed, and the results are presented analytically. Emotion recognition was successful when both speakers and utterances were "known" to the classifier, whereas severe misclassifications occurred in the utterance-independent setting. Nonetheless, the proposed feature vector achieved promising results for utterance-independent recognition of high- and low-arousal emotions.
