
Faster R-CNN and Geometric Transformation-Based Detection of Driver’s Eyes Using Multiple Near-Infrared Camera Sensors


Abstract

Studies are being actively conducted on camera-based driver gaze tracking in the vehicle environment, both for vehicle interfaces and for analyzing forward attention to judge driver inattention. In existing single-camera-based methods, the eye information needed for gaze tracking frequently cannot be observed well in the camera input image because the driver's head turns while driving. To solve this problem, existing studies have used multiple-camera-based methods to obtain images for tracking the driver's gaze. However, this approach requires detecting the eyes and extracting features from every image obtained from all cameras, which incurs excessive computation and processing time and makes it difficult to deploy in an actual vehicle environment. To address these limitations, this study proposes a method that applies a shallow convolutional neural network (CNN) to the driver's face images acquired from two cameras to adaptively select the camera image more suitable for detecting eye position; Faster R-CNN is then applied to the selected driver image, and after the driver's eyes are detected, the eye positions are mapped onto the other camera's image through a geometric transformation matrix. Experiments were conducted using the self-built Dongguk Dual Camera-based Driver Database (DDCD-DB1), which includes images of 26 participants acquired inside a vehicle, and the open Columbia Gaze Data Set (CAVE-DB). The results confirm that the performance of the proposed method is superior to that of existing methods.
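The mapping step mentioned in the abstract can be illustrated with a short sketch. The following Python snippet is a minimal, hypothetical example (not the authors' implementation) of projecting eye positions detected in one camera image into the other camera's image using a precomputed geometric transformation, here a homography estimated with OpenCV from assumed corresponding calibration points; all point values and variable names are placeholders for illustration.

```python
import numpy as np
import cv2

# Hypothetical calibration: corresponding pixel locations observed in both
# camera views (e.g., facial landmarks or calibration-target corners).
pts_cam1 = np.array([[320, 240], [400, 250], [350, 310], [300, 320]], dtype=np.float32)
pts_cam2 = np.array([[310, 235], [395, 248], [345, 305], [295, 318]], dtype=np.float32)

# Estimate a 3x3 geometric transformation (homography) from camera 1 to camera 2.
H, _ = cv2.findHomography(pts_cam1, pts_cam2)

# Eye positions detected in the selected (camera 1) image, shaped (N, 1, 2).
eyes_cam1 = np.array([[[352.0, 261.0]], [[398.0, 258.0]]], dtype=np.float32)

# Map the detected eye centers into the other camera's image.
eyes_cam2 = cv2.perspectiveTransform(eyes_cam1, H)
print(eyes_cam2.reshape(-1, 2))
```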
