International Conference on Artificial Intelligence

An Abstract Model of Multimodal Fusion using Fuzzy Sets to Derive Interactive Emotions



Abstract

Recently, social robotics applications have attracted attention, as they address profound life demands. These intelligent robotic applications should be able to perceive, recognize, and respond to human emotional states. Humans express their emotions both verbally, through speech and silence, and nonverbally, through facial expressions and gestures. Although several approaches have been proposed to recognize a limited set of human emotions from a single modality, only limited, ad hoc work has been done to integrate and fuse multiple modalities. In this paper, we propose an abstract model of multimodal fusion of facial expressions, speech, and gestures. An abstract semantic model for identifying emotions is presented, involving set-theoretic operations and functional mapping. A semantic algebra is also described.
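The fusion idea described in the abstract — combining per-modality evidence with set-theoretic operations on fuzzy sets, then mapping the result to an emotion — can be illustrated with a minimal sketch. The emotion labels, membership values, and the choice of fuzzy union (element-wise maximum) followed by an argmax mapping are all illustrative assumptions, not the paper's actual algebra:

```python
# Minimal sketch of fuzzy-set multimodal fusion (illustrative assumptions only).
# Each modality yields a fuzzy set: a membership degree in [0, 1] per emotion.

EMOTIONS = ["happy", "sad", "angry", "neutral"]  # hypothetical label set

def fuzzy_union(*memberships):
    """Set-theoretic fuzzy union: element-wise max across modalities."""
    return {e: max(m.get(e, 0.0) for m in memberships) for e in EMOTIONS}

def derive_emotion(face, speech, gesture):
    """Functional mapping: fuse the three modalities, then pick the
    emotion with the highest fused membership."""
    fused = fuzzy_union(face, speech, gesture)
    return max(fused, key=fused.get), fused

# Hypothetical per-modality recognizer outputs.
face    = {"happy": 0.7, "neutral": 0.3}
speech  = {"happy": 0.4, "sad": 0.2}
gesture = {"neutral": 0.6}

label, fused = derive_emotion(face, speech, gesture)
print(label)   # -> happy
print(fused)   # -> {'happy': 0.7, 'sad': 0.2, 'angry': 0.0, 'neutral': 0.6}
```

Other fuzzy operators (e.g., min for intersection, or a weighted T-conorm) could replace the max here; the point is only that fusion reduces to algebraic operations over the modality-wise fuzzy sets.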
