
Explicit Task Representation based on Gesture Interaction

Abstract

This paper describes the role and use of an explicit task representation in applications where humans interact with non-traditional computer environments using gestures. The focus lies on training and assistance applications in which the training objective includes implicit knowledge such as motor skills. On the one hand, these applications require a clear and transparent description of what has to be done during the interaction; on the other hand, they are highly interactive and multimodal. Top-down, the human-computer interaction is therefore modelled as a collaboration in which each participant pursues an individual goal stipulated by a task. Bottom-up, gesture recognition determines the user's actions by processing the continuous data streams from the environment. The resulting gesture or action is interpreted as the user's intention and evaluated within the collaboration, allowing the system to reason about how best to provide guidance at that point. A vertical prototype combining a haptic virtual environment with a knowledge-based reasoning system is discussed, and the evolution of the task-based collaboration is demonstrated.
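To make the described architecture concrete, the following is a minimal sketch of an explicit task representation whose steps are checked against gestures reported by a bottom-up recognizer; the TaskStep/Task classes, gesture labels, and guidance strings are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class TaskStep:
    name: str                # human-readable step description
    expected_gesture: str    # gesture label the recognizer is expected to report
    hint: str                # guidance offered when the user deviates


@dataclass
class Task:
    goal: str
    steps: list
    current: int = 0

    def evaluate(self, recognized_gesture: str) -> str:
        """Interpret the recognized gesture as the user's intention and
        decide how to guide the user at the current task step."""
        if self.current >= len(self.steps):
            return f"Task '{self.goal}' already completed."
        step = self.steps[self.current]
        if recognized_gesture == step.expected_gesture:
            self.current += 1
            if self.current == len(self.steps):
                return f"'{step.name}' done. Task '{self.goal}' completed."
            return f"'{step.name}' done. Next: {self.steps[self.current].name}"
        # Deviation from the stipulated step: offer guidance instead of advancing.
        return f"Expected '{step.expected_gesture}' for '{step.name}'. Hint: {step.hint}"


# Usage: a toy assembly-training task driven by recognized gestures.
task = Task(
    goal="insert peg",
    steps=[
        TaskStep("grasp the peg", "grasp", "close your fingers around the peg"),
        TaskStep("align with the hole", "align", "move the peg above the hole"),
        TaskStep("push the peg in", "push", "apply downward force"),
    ],
)
for gesture in ["grasp", "push", "align", "push"]:
    print(task.evaluate(gesture))
```

In this sketch the task supplies the top-down expectation, while the recognized gesture stream supplies the bottom-up evidence; the evaluate step is where the system reasons about whether to advance the collaboration or to provide guidance.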
