Published in: Proceedings of the 1st International Conference on Computer Graphics, Virtual Reality and Visualisation

A gesture processing framework for multimodal interaction in virtual reality



Abstract

This article presents a gesture detection and analysis framework for modelling multimodal interactions. It is particularly designed for use in Virtual Reality (VR) applications and contains an abstraction layer for different sensor hardware. Using the framework, gestures are described by their characteristic spatio-temporal features, which are, at the lowest level, calculated by simple predefined detector modules, or nodes. These nodes can be connected by a data routing mechanism to perform more elaborate evaluation functions, thereby establishing complex detector nets. Typical problems arising from the time-dependent invalidation of multimodal utterances under immersive conditions led to the development of pre-evaluation concepts, which also support integration into scene-graph-based systems with traversal-type access. Examples of realized interactions illustrate applications that make use of the described concepts.
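The detector-net idea described in the abstract can be sketched in code. The following is a minimal illustration under stated assumptions, not the paper's actual API: all names (`DetectorNode`, `DetectorNet`, the `hand_speed` and `pointing` features) are hypothetical, and the nodes compute toy spatio-temporal features routed by name from one node to the next.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class DetectorNode:
    """One detector in the net: computes a single feature from named inputs."""
    name: str
    fn: Callable[[Dict[str, float]], float]
    inputs: List[str] = field(default_factory=list)

class DetectorNet:
    """Evaluates nodes in insertion order, routing each output by its name."""
    def __init__(self) -> None:
        self.nodes: List[DetectorNode] = []

    def add(self, node: DetectorNode) -> "DetectorNet":
        self.nodes.append(node)
        return self

    def evaluate(self, sensor_sample: Dict[str, float]) -> Dict[str, float]:
        # Raw sensor values seed the net; each node adds its feature value,
        # so later nodes can consume the outputs of earlier ones.
        values = dict(sensor_sample)
        for node in self.nodes:
            values[node.name] = node.fn({k: values[k] for k in node.inputs})
        return values

# Hypothetical example: a "pointing" detector composed from two simpler nodes.
net = DetectorNet()
net.add(DetectorNode("hand_speed",
                     lambda v: abs(v["hand_dx"]) / v["dt"],
                     ["hand_dx", "dt"]))
net.add(DetectorNode("arm_extended",
                     lambda v: 1.0 if v["arm_len"] > 0.5 else 0.0,
                     ["arm_len"]))
net.add(DetectorNode("pointing",
                     lambda v: v["arm_extended"] if v["hand_speed"] < 0.1 else 0.0,
                     ["arm_extended", "hand_speed"]))

result = net.evaluate({"hand_dx": 0.01, "dt": 0.5, "arm_len": 0.6})
print(result["pointing"])  # → 1.0 (arm extended, hand nearly still)
```

The composition step (connecting predefined nodes into a net via named data routing) mirrors the paper's description; a real system would evaluate nodes over time-stamped sensor streams rather than a single sample.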

