NICTA-HCSNet Multimodal User Interaction Workshop

Explicit Task Representation based on Gesture Interaction



Abstract

This paper describes the role and use of an explicit task representation in applications where humans interact with non-traditional computer environments using gestures. The focus lies on training and assistance applications in which the training objective includes implicit knowledge, e.g., motor skills. On the one hand, these applications require a clear and transparent description of what has to be done during the interaction; on the other hand, they are highly interactive and multimodal. The human-computer interaction is therefore modelled top-down as a collaboration in which each participant pursues an individual goal stipulated by a task. In bottom-up processing, gesture recognition determines the user's actions by processing the continuous data streams from the environment. The resulting gesture or action is interpreted as the user's intention and evaluated during the collaboration, allowing the system to reason about how best to provide guidance at that point. A vertical prototype combining a haptic virtual environment with a knowledge-based reasoning system is discussed, and the evolution of the task-based collaboration is demonstrated.
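The abstract describes a two-layer loop: a top-down task representation stipulating what each step of the interaction should accomplish, and bottom-up gesture recognition turning continuous sensor streams into discrete actions that are evaluated against the task. A minimal sketch of that loop, assuming hypothetical names throughout (the `Task`/`TaskStep` structures and the toy velocity threshold are illustrative assumptions, not the paper's implementation):

```python
from dataclasses import dataclass

# Hypothetical, simplified explicit task representation: an ordered list
# of steps, each naming the action expected from the trainee and the
# guidance to give when the user's action deviates.
@dataclass
class TaskStep:
    name: str
    expected_action: str
    guidance: str

@dataclass
class Task:
    goal: str
    steps: list
    current: int = 0

    def done(self) -> bool:
        return self.current >= len(self.steps)

def recognise_gesture(stream_window: list) -> str:
    """Stand-in for bottom-up gesture recognition: maps a window of
    continuous sensor samples to a discrete action label.
    Here: a toy threshold on mean velocity (an assumption)."""
    mean_v = sum(stream_window) / len(stream_window)
    return "push" if mean_v > 0.5 else "idle"

def collaborate(task: Task, stream_window: list) -> str:
    """Top-down evaluation: interpret the recognised gesture as the
    user's intention against the current task step, then either
    advance the task or return guidance."""
    if task.done():
        return "task complete"
    action = recognise_gesture(stream_window)
    step = task.steps[task.current]
    if action == step.expected_action:
        task.current += 1
        return f"step '{step.name}' accomplished"
    return step.guidance

task = Task(goal="insert peg",
            steps=[TaskStep("approach", "push",
                            "move the tool towards the target")])
print(collaborate(task, [0.7, 0.8, 0.9]))  # expected action recognised
print(collaborate(task, [0.1]))            # task already complete
```

In the prototype described, the bottom-up layer would be driven by the haptic virtual environment and the top-down evaluation by the knowledge-based reasoning system; the sketch only shows how the two layers meet around the explicit task representation.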
