
Towards context-aware gesture enabled user interfaces.



Abstract

Conventional graphical user interface techniques appear to be ill-suited to the kinds of interactive platforms required for future generations of computing devices. 3D graphics and immersive virtual reality applications require interactive 3D object manipulation and navigation. Perceptual user interfaces using speech and gestures are in high demand to provide a more natural human-computer interaction modality. The major challenge facing perceptual user interfaces is the lack of a standard application programming interface capable of handling ambiguity and of incorporating domain-specific knowledge about the context in which the user interface is used.

In this dissertation, we study dynamic hand gestures, which are defined as sequences of hand postures. We emphasize the generality of our dynamic gesture model, which is capable of recognizing essentially any dynamic hand gesture that can be confined to a sequence of postures. Hand postures are static poses and are defined by an array of posture attributes. We use a generic definition of hand postures that covers the space of hand postures at different levels of granularity and abstraction, and we monitor posture variation as it unfolds within the dynamic gesture.

We also study the role of context in gesture interpretation without making assumptions about a specific application. We view the hand-tracking and gesture-recognition subsystems as integral parts of a larger distributed, multi-user, multi-service application, where gesture interpretation plays the role of resolving the ambiguity of the recognized gesture. We identify the aspects relevant to hand gesture interpretation and propose an agent-based system architecture for gesture interpretation.

Finally, we propose a framework for gesture-enabled system design in which context is placed in a middleware layer that interfaces with all submodules in the system, playing a dialectic role and keeping the overall system stable.
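The gesture model described above — a posture as an array of attributes, a dynamic gesture as a sequence of postures, with coarser granularity obtained by leaving attributes unspecified — can be sketched minimally as follows. This is an illustrative reconstruction, not code from the dissertation; all names and the two-attribute example are hypothetical.

```python
from dataclasses import dataclass

# Illustrative sketch of the abstract's gesture model: a hand posture
# is a static pose defined by an array of posture attributes, and a
# dynamic gesture is a sequence of postures.

@dataclass(frozen=True)
class Posture:
    """A static hand pose, defined by a tuple of attribute values
    (e.g. per-finger flexion states); the encoding is hypothetical."""
    attributes: tuple

def matches(observed: Posture, template: Posture) -> bool:
    # A template attribute of None acts as a wildcard, letting one
    # template cover a space of postures at coarser granularity.
    return all(t is None or o == t
               for o, t in zip(observed.attributes, template.attributes))

def recognize(stream, gesture) -> bool:
    """Scan a stream of observed postures and report whether the
    gesture's posture sequence unfolds, in order, within it."""
    i = 0
    for posture in stream:
        if matches(posture, gesture[i]):
            i += 1
            if i == len(gesture):
                return True
    return False

# Example: a 'grab' gesture as an open hand followed by a closed fist,
# over two finger-flexion attributes (0 = extended, 1 = flexed).
open_hand = Posture((0, 0))
fist = Posture((1, 1))
grab = [Posture((0, None)), fist]   # wildcard on the second attribute
stream = [open_hand, Posture((0, 1)), fist]
print(recognize(stream, grab))  # True
```

The wildcard attribute is one simple way to realize the abstract's "different levels of granularity and abstraction": a fully specified template names one posture, while a partially specified one names a family of postures.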

Bibliographic record

  • Author

    El-Sawah, Ayman.

  • Affiliation

    University of Ottawa (Canada).

  • Degree-granting institution: University of Ottawa (Canada).
  • Subject: Computer Science.
  • Degree: Ph.D.
  • Year: 2008
  • Pages: 142 p.
  • Total pages: 142
  • Format: PDF
  • Language: eng

