Journal: International Journal of Human-Computer Studies

Understanding the impact of multimodal interaction using gaze informed mid-air gesture control in 3D virtual objects manipulation


Abstract

Multimodal interactions provide users with more natural ways to manipulate virtual 3D objects than traditional input methods. An emerging approach is gaze modulated pointing, which enables users to perform object selection and manipulation in a virtual space conveniently through a combination of gaze and other interaction techniques (e.g., mid-air gestures). As gaze modulated pointing uses different sensors to track and detect user behaviours, its performance relies on the user's perception of the exact spatial mapping between the virtual space and the physical space. An underexplored issue is that manipulation errors (e.g., out-of-boundary errors, proximity errors) may occur when the spatial mapping differs from the user's perception. Therefore, in gaze modulated pointing, as gaze can introduce misalignment of the spatial mapping, it may lead to the user's misperception of the virtual environment and, consequently, to manipulation errors. This paper provides a clear definition of the problem through a thorough investigation of its causes and specifies the conditions under which it occurs, which is further validated in the experiment. It also proposes three methods (Scaling, Magnet and Dual-gaze) to address the problem and examines them in a comparative study involving 20 participants and 1040 runs. The results show that all three methods improved manipulation performance with regard to the defined problem, with Magnet and Dual-gaze delivering better performance than Scaling. This finding could inform a more robust multimodal interface design supported by both eye tracking and mid-air gesture control without losing efficiency and stability.
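The abstract names the three correction methods (Scaling, Magnet, Dual-gaze) but gives no implementation details. As a purely illustrative sketch, and not the paper's actual method, a magnet-style correction for gaze modulated pointing can be thought of as snapping the cursor to the nearest selectable target centre whenever it falls within a small attraction radius, tolerating small misalignments between the perceived and tracked spatial mappings; all names and the radius parameter below are assumptions for illustration:

```python
import math

def magnet_snap(cursor, targets, radius):
    """Illustrative magnet-style correction (hypothetical, not the
    paper's implementation): return the nearest target centre within
    `radius` of the 2D cursor position, or the cursor unchanged if
    no target is close enough."""
    best, best_dist = None, radius
    for target in targets:
        d = math.dist(cursor, target)
        if d <= best_dist:
            best, best_dist = target, d
    return best if best is not None else cursor

# A cursor slightly off a target snaps to it; a distant cursor is untouched.
print(magnet_snap((0.9, 0.0), [(1.0, 0.0), (5.0, 5.0)], 0.5))  # (1.0, 0.0)
print(magnet_snap((3.0, 0.0), [(1.0, 0.0), (5.0, 5.0)], 0.5))  # (3.0, 0.0)
```

Such snapping trades a small loss of pointing freedom for robustness against out-of-boundary and proximity errors, which is consistent with the abstract's finding that Magnet outperformed the plainer Scaling correction.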

