IWCS Workshop on Foundations of Situated and Multimodal Communication

Creating Common Ground through Multimodal Simulations



Abstract

The demand for more sophisticated human-computer interactions is rapidly increasing, as users become more accustomed to conversation-like interactions with their devices. In this paper, we examine this changing landscape in the context of human-machine interaction in a shared workspace to achieve a common goal. In our prototype system, people and avatars cooperate to build blocks-world structures through the interaction of language, gesture, vision, and action. This provides a platform for studying the computational issues involved in multimodal communication. In order to establish elements of the common ground in discourse between speakers, we have created an embodied 3D simulation enabling both the generation and interpretation of multiple modalities, including language, gesture, and the visualization of objects moving and agents acting in their environment. The simulation is built on the modeling language VoxML, which encodes objects with rich semantic typing and action affordances, and actions themselves as multimodal programs, enabling contextually salient inferences and decisions in the environment. We illustrate this with a walk-through of multimodal communication in a shared task.
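The core idea of the abstract — that objects carry semantic types and action affordances, and that actions are small programs which consult them — can be sketched as follows. This is a minimal illustrative toy, not VoxML's actual syntax; the names `VoxObject`, `affordances`, and `grasp` are hypothetical stand-ins.

```python
from dataclasses import dataclass, field

@dataclass
class VoxObject:
    """A toy object entry: a name, a semantic type, and the set of
    actions the object affords (hypothetical simplification of VoxML)."""
    name: str
    sem_type: str
    affordances: set = field(default_factory=set)

def grasp(agent: str, obj: VoxObject) -> str:
    """An action realized as a program: it checks the object's
    affordances before committing to the behavior, supporting
    contextually salient decisions about what the agent can do."""
    if "grasp" not in obj.affordances:
        raise ValueError(f"{obj.name} does not afford grasping")
    return f"{agent} grasps {obj.name}"

block = VoxObject("block1", "block", {"grasp", "lift", "put"})
print(grasp("avatar", block))  # avatar grasps block1
```

In a blocks-world dialogue, a request like "pick up the block" would succeed only for objects whose encoding affords grasping, which is the kind of inference the affordance typing is meant to license.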
