International Conference on Intelligent Virtual Agents

Modeling the Semantic Coordination of Speech and Gesture under Cognitive and Linguistic Constraints



Abstract

This paper addresses the semantic coordination of speech and gesture, a major prerequisite for endowing virtual agents with convincing multimodal behavior. Previous research has focused on building rule- or data-based models specific to a particular language, culture, or individual speaker, without considering the underlying cognitive processes. We present a flexible cognitive model in which both linguistic and cognitive constraints are taken into account in order to simulate natural semantic coordination across speech and gesture. An implementation of this model is presented, and first simulation results, compatible with empirical data from the literature, are reported.
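To make the idea of coordinating speech and gesture under joint constraints concrete, the following is a minimal toy sketch (not the authors' model): it distributes a referent's semantic features between the speech and gesture channels, using a hypothetical lexicalization set as a stand-in for a linguistic constraint and a feature-count cap as a crude stand-in for a cognitive (working-memory) constraint. All names and parameters here are illustrative assumptions.

```python
def coordinate(features, lexicalized, capacity):
    """Toy distribution of semantic features across speech and gesture.

    Illustrative assumptions (not from the paper):
    - Linguistic constraint: only features the language readily
      lexicalizes (`lexicalized`) may be verbalized.
    - Cognitive constraint: at most `capacity` features are verbalized,
      a crude stand-in for limited working-memory resources; the
      remaining features are expressed in gesture instead.
    """
    speech, gesture = [], []
    for feature in features:
        if feature in lexicalized and len(speech) < capacity:
            speech.append(feature)  # verbalize within capacity
        else:
            gesture.append(feature)  # offload to the gesture channel
    return speech, gesture


# Example: a round, large landmark located left of a church; the spatial
# relation is hard to lexicalize, so it falls to gesture.
speech, gesture = coordinate(
    features=["round", "large", "left-of-church"],
    lexicalized={"round", "large"},
    capacity=2,
)
```

In this sketch, tightening `capacity` shifts more content onto gesture, which is one way a cognitive constraint could reshape the semantic division of labor between the two modalities.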
