ACM Transactions on Applied Perception (TAP)

Eye Tracking Interaction on Unmodified Mobile VR Headsets Using the Selfie Camera



Abstract

Input methods for interaction in smartphone-based virtual and mixed reality (VR/MR) currently rely on uncomfortable head tracking that controls a pointer on the screen. User fixations are a fast and natural input method for VR/MR interaction. Previously, eye tracking in mobile VR suffered from low accuracy, long processing times, and the need for hardware add-ons such as anti-reflective lens coatings and infrared emitters. We present an innovative mobile VR eye tracking methodology that uses only the eye images captured by the front-facing (selfie) camera through the headset's lens, without any hardware modifications. Our system first enhances the low-contrast, poorly lit eye images by applying a pipeline of customised low-level image enhancements that suppress obtrusive lens reflections. We then propose an iris region-of-interest detection algorithm that runs only once; this speeds up iris tracking by reducing the iris search space on mobile devices. We iteratively fit a customised geometric model to the iris to refine its coordinates, and we display a thin bezel of light at the top edge of the screen for constant illumination. A confidence metric estimates the probability of successful iris detection. Calibration and a linear gaze mapping between the estimated iris centroid and physical pixels on the screen result in low-latency, real-time iris tracking. A formal study confirmed that our system's accuracy is comparable to the eye trackers of commercial VR headsets within the central part of the headset's field-of-view. In a VR game, task completion time with gaze-driven interaction was as fast as with head-tracked interaction, without the need for consecutive head motions. In a VR panorama viewer, users could successfully switch between panoramas using gaze.
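The abstract's final step, a calibration followed by a linear mapping from the estimated iris centroid to screen pixels, can be sketched as a least-squares affine fit. The sketch below is illustrative only: the paper does not publish its exact formulation, so the function names and the choice of an affine (rather than strictly linear) model with a bias term are assumptions; the user would fixate a handful of known on-screen calibration targets while iris centroids are recorded.

```python
import numpy as np

def fit_linear_gaze_map(iris_pts, screen_pts):
    """Fit an affine map from iris centroids (camera pixels) to screen
    pixels via least squares: [sx, sy] = [ix, iy, 1] @ M, with M a 3x2
    matrix. iris_pts and screen_pts are N x 2 arrays of corresponding
    calibration samples (N >= 3, non-collinear)."""
    iris = np.asarray(iris_pts, dtype=float)
    screen = np.asarray(screen_pts, dtype=float)
    # Homogeneous design matrix: append a column of ones for the bias term.
    X = np.hstack([iris, np.ones((len(iris), 1))])
    # Solve X @ M ~= screen in the least-squares sense.
    M, *_ = np.linalg.lstsq(X, screen, rcond=None)
    return M

def map_gaze(M, iris_centroid):
    """Map a single iris centroid (ix, iy) to an estimated screen point."""
    ix, iy = iris_centroid
    return np.array([ix, iy, 1.0]) @ M
```

Once fitted, `map_gaze` is a single 3x2 matrix product per frame, which is consistent with the low-latency, real-time tracking the abstract claims for this stage.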
