Driver Gaze Zone Dataset With Depth Data

Abstract

Detecting driver inattention and distraction is a crucial feature of driver monitoring and driver assistance systems (DAS) and a key to avoiding car crashes. Currently available datasets mainly track head pose, and only a few were collected inside a car. We therefore present a novel public dataset designed to train algorithms that estimate the driver's gaze zone in real driving conditions. We provide labeled frame-by-frame images covering 19 points in the car. Depth (3D) images were also captured and are aligned with the infrared images. This new dataset is the largest one available for this purpose and the only one that provides 2D and 3D data aligned at the pixel level, captured with an Intel® RealSense™ camera. Finally, in order to establish a baseline, we present a gaze zone estimator built using a transfer learning method.
