IEEE Transactions on Visualization and Computer Graphics

Effects of Depth Information on Visual Target Identification Task Performance in Shared Gaze Environments

Abstract

Human gaze awareness is important for social and collaborative interactions. Recent technological advances in augmented reality (AR) displays and sensors provide us with the means to extend collaborative spaces with real-time dynamic AR indicators of one's gaze, for example via three-dimensional cursors or rays emanating from a partner's head. However, such gaze cues are only as useful as the quality of the underlying gaze estimation and the accuracy of the display mechanism. Depending on the type of visualization and the characteristics of the errors, AR gaze cues could either enhance or interfere with collaborations. In this paper, we present two human-subject studies in which we investigate the influence of angular and depth errors, target distance, and the type of gaze visualization on participants' performance and subjective evaluation during a collaborative task with a virtual human partner, in which participants identified targets within a dynamically walking crowd. First, our results show a significant difference in performance between the two gaze visualizations, ray and cursor, in conditions with simulated angular and depth errors: the ray visualization provided significantly faster response times and fewer errors than the cursor visualization. Second, our results show that under optimal conditions, among four different gaze visualization methods, a ray without depth information provides the worst performance and is rated lowest, while a combination of a ray and cursor with depth information is rated highest. We discuss the subjective and objective performance thresholds and provide guidelines for practitioners in this field.
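As context for the visualizations compared in the abstract, the sketch below illustrates one way a ray-style gaze cue and a depth-based cursor could be computed from a partner's head pose and gaze estimate, with simulated angular and depth errors of the kind the studies manipulate. This is a minimal illustrative sketch, not code from the paper; all function names, parameters, and the error model are assumptions.

```python
# Minimal sketch (not from the paper): computing a ray-style gaze cue and a
# depth-based cursor from a partner's head pose and gaze estimate, with
# simulated angular and depth errors. All names and the error model are
# illustrative assumptions.
import numpy as np

def perturb_direction(direction, angular_error_deg, rng):
    """Rotate a unit gaze direction by a fixed angular error about a random perpendicular axis."""
    direction = direction / np.linalg.norm(direction)
    axis = np.cross(direction, rng.normal(size=3))
    axis /= np.linalg.norm(axis)
    theta = np.deg2rad(angular_error_deg)
    # Rodrigues' rotation formula (the axis·direction term vanishes because the axis is perpendicular).
    return (direction * np.cos(theta)
            + np.cross(axis, direction) * np.sin(theta)
            + axis * np.dot(axis, direction) * (1.0 - np.cos(theta)))

def gaze_cue(head_pos, gaze_dir, gaze_depth,
             angular_error_deg=0.0, depth_error_m=0.0, rng=None):
    """Return (ray_origin, ray_dir, cursor_pos) for an AR gaze visualization.

    The ray is drawn from the partner's head along the (possibly perturbed)
    gaze direction; the cursor sits at the estimated depth along that ray,
    so only the cursor is affected by depth error.
    """
    rng = rng if rng is not None else np.random.default_rng()
    head_pos = np.asarray(head_pos, dtype=float)
    noisy_dir = perturb_direction(np.asarray(gaze_dir, dtype=float),
                                  angular_error_deg, rng)
    noisy_depth = max(0.0, gaze_depth + depth_error_m)
    return head_pos, noisy_dir, head_pos + noisy_dir * noisy_depth

# Example: a partner looking at a target 5 m away, rendered with 5 degrees
# of angular error and 0.5 m of depth error.
origin, direction, cursor = gaze_cue(
    head_pos=[0.0, 1.7, 2.0], gaze_dir=[0.0, 0.0, 1.0], gaze_depth=5.0,
    angular_error_deg=5.0, depth_error_m=0.5)
print("ray origin:", origin, "ray direction:", direction, "cursor:", cursor)
```

In this framing, a ray-only cue would render just the origin and direction without depth, while a cursor or combined cue would additionally place the cursor at the estimated depth along the ray.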