Journal: Expert Systems with Applications

Multimodal emotion recognition with evolutionary computation for human-robot interaction



Abstract

Service robotics is an important field of research for the development of assistive technologies. In particular, humanoid robots will play an increasingly important role in our society. More natural assistive interaction with humanoid robots can be achieved if the emotional aspect is considered. However, emotion recognition is one of the most challenging topics in pattern recognition, and improved intelligent techniques have to be developed to accomplish this goal. Recent research has addressed the emotion recognition problem with techniques such as Artificial Neural Networks (ANNs) and Hidden Markov Models (HMMs), and the reliability of the proposed approaches has been assessed (in most cases) with standard databases. In this work we (1) explored the implications of using standard databases for the assessment of emotion recognition techniques, (2) extended the evolutionary optimization of ANNs and HMMs for the development of a multimodal emotion recognition system, (3) set guidelines for the development of emotional databases of speech and facial expressions, (4) set rules for the phonetic transcription of Mexican speech, and (5) evaluated the suitability of the multimodal system within the context of spoken dialogue between a humanoid robot and human users.
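Point (2) above refers to evolutionary optimization of ANN and HMM structures. The abstract does not give the encoding, operators, or fitness function used in the paper, so the following is only a minimal sketch of the general idea: a toy genetic algorithm searching over the hidden-layer sizes of a hypothetical two-layer network, with a stand-in fitness function.

```python
import random

# Illustrative sketch only: the paper's actual genome encoding and fitness
# are not given in this abstract. A real fitness function would train an
# ANN with the candidate layer sizes and score it on held-out emotion data.

def fitness(hidden_sizes):
    # Stand-in for validation accuracy; peaks at a hypothetical optimum.
    target = [64, 32]  # hypothetical, for illustration only
    return -sum(abs(h - t) for h, t in zip(hidden_sizes, target))

def mutate(genome, rng):
    # Perturb one randomly chosen layer size; keep sizes >= 1.
    child = list(genome)
    i = rng.randrange(len(child))
    child[i] = max(1, child[i] + rng.choice([-8, -4, 4, 8]))
    return child

def evolve(pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randrange(8, 128) for _ in range(2)] for _ in range(pop_size)]
    history = []  # best fitness per generation (non-decreasing: elitist)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        history.append(fitness(pop[0]))
        survivors = pop[: pop_size // 2]  # truncation selection keeps the best
        pop = survivors + [mutate(rng.choice(survivors), rng)
                           for _ in range(pop_size - len(survivors))]
    best = max(pop, key=fitness)
    return best, history

best, history = evolve()
print(best, fitness(best))
```

Because the top half of each generation survives unchanged, the best fitness never decreases across generations; the same elitist scheme applies whether the genome encodes ANN layer sizes or HMM state counts.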
The development of intelligent systems for emotion recognition can be improved by the findings of the present work: (a) emotion recognition depends on the structure of the database sub-sets used for training and testing, and also on the recognition technique, since a specific emotion can be recognized much more reliably by a specific technique; (b) optimization of the HMMs led to a Bakis (left-to-right) structure, which is more suitable for acoustic modeling of emotion-specific vowels, while optimization of the ANNs led to a structure more suitable for the recognition of facial expressions; (c) some emotions can be better recognized from speech patterns than from visual patterns; and (d) the weighted integration of the multimodal emotion recognition system optimized with these observations can achieve a recognition rate of up to 97.00% in live dialogue tests with a humanoid robot. (C) 2016 Elsevier Ltd. All rights reserved.
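Finding (d) describes a weighted integration of the audio and visual classifiers. The actual weights and score types are not given in this abstract, so the following is a hedged sketch of generic late fusion: per-emotion posteriors from each modality are combined by a weighted sum, and the emotion set and weights (0.6 speech / 0.4 face) are hypothetical.

```python
# Hypothetical emotion inventory; the paper's label set is not given here.
EMOTIONS = ["anger", "happiness", "neutral", "sadness"]

def fuse(speech_scores, face_scores, w_speech=0.6, w_face=0.4):
    """Late fusion: weighted sum of per-emotion posteriors from each modality."""
    fused = {e: w_speech * speech_scores[e] + w_face * face_scores[e]
             for e in EMOTIONS}
    return max(fused, key=fused.get), fused

# Example consistent with finding (c): speech strongly indicates anger
# while the facial expression is ambiguous, so the fused decision follows
# the more informative modality.
speech = {"anger": 0.7, "happiness": 0.1, "neutral": 0.1, "sadness": 0.1}
face   = {"anger": 0.3, "happiness": 0.4, "neutral": 0.2, "sadness": 0.1}
label, fused = fuse(speech, face)
print(label)  # anger
```

Because the modality weights sum to 1 and each input distribution sums to 1, the fused scores remain a valid probability distribution.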


