Games and Learning Alliance

FILTWAM and Voice Emotion Recognition

Abstract

This paper introduces the voice emotion recognition part of our framework for improving learning through webcams and microphones (FILTWAM). This framework enables multimodal emotion recognition of learners during game-based learning. The main goal of this study is to validate the use of microphone data for a real-time and adequate interpretation of vocal expressions into emotional states, where the software is calibrated with end users. FILTWAM already incorporates a validated face emotion recognition module and is here extended with a voice emotion recognition module. This extension aims to provide relevant and timely feedback based upon the learner's vocal intonations. The feedback is expected to enhance the learner's awareness of his or her own behavior. Six test persons received the same computer-based tasks, in which they were requested to mimic specific vocal expressions. Each test person mimicked 82 emotions, which led to a dataset of 492 emotions. All sessions were recorded on video. The overall accuracy of our software, based on comparing the requested emotions with the recognized emotions, is a promising 74.6 % for the emotions happy and neutral; the lower values obtained for an extended set of emotions will be improved. In contrast with existing software, our solution allows learners' intonations to be continuously and unobtrusively monitored and converted into emotional states. This paves the way for enhancing the quality and efficacy of game-based learning by taking the learner's emotional states into account and linking them to pedagogical scaffolding.
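
The 74.6 % figure comes from comparing the emotion each participant was asked to mimic with the emotion the software recognized. The snippet below is a minimal sketch of that kind of evaluation, assuming simple paired label lists; the function name and the sample labels are illustrative and not the authors' actual code or dataset.

```python
# Minimal sketch (not the authors' evaluation code): overall and per-emotion
# accuracy from paired lists of requested vs. recognized emotion labels.
from collections import defaultdict

def accuracy_by_emotion(requested, recognized):
    """Return (per-emotion accuracy dict, overall accuracy) for paired labels."""
    assert len(requested) == len(recognized)
    hits = defaultdict(int)
    totals = defaultdict(int)
    for want, got in zip(requested, recognized):
        totals[want] += 1
        if want == got:
            hits[want] += 1
    per_emotion = {e: hits[e] / totals[e] for e in totals}
    overall = sum(hits.values()) / len(requested)
    return per_emotion, overall

# Illustrative labels only; the study itself collected 6 participants x 82
# mimicked expressions = 492 samples.
requested  = ["happy", "neutral", "happy", "angry"]
recognized = ["happy", "neutral", "sad",   "angry"]
per_emotion, overall = accuracy_by_emotion(requested, recognized)
print(per_emotion, overall)
```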
