IEEE/CVF Conference on Computer Vision and Pattern Recognition

Event-Based Vision Meets Deep Learning on Steering Prediction for Self-Driving Cars



Abstract

Event cameras are bio-inspired vision sensors that naturally capture the dynamics of a scene, filtering out redundant information. This paper presents a deep neural network approach that unlocks the potential of event cameras on a challenging motion-estimation task: prediction of a vehicle's steering angle. To make the most of this sensor-algorithm combination, we adapt state-of-the-art convolutional architectures to the output of event sensors and extensively evaluate the performance of our approach on a publicly available large-scale event-camera dataset (~1000 km). We present qualitative and quantitative explanations of why event cameras allow robust steering prediction even in cases where traditional cameras fail, e.g., under challenging illumination conditions and fast motion. Finally, we demonstrate the advantages of leveraging transfer learning from traditional to event-based vision, and show that our approach outperforms state-of-the-art algorithms based on standard cameras.
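
The abstract describes adapting ImageNet-pretrained convolutional architectures to the output of event sensors and regressing the vehicle's steering angle, with transfer learning from frame-based vision. The following is a minimal sketch of what such a pipeline can look like, not the authors' released code: the 2-channel event-count input, the ResNet-18 backbone, and the single-output regression head are assumptions made for illustration (PyTorch/torchvision ≥ 0.13 assumed).

```python
# Hedged sketch: an ImageNet-pretrained CNN adapted to event-camera frames and
# fine-tuned to regress a steering angle. Input channels, backbone choice, and
# head design are illustrative assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn
from torchvision import models


class EventSteeringNet(nn.Module):
    def __init__(self, in_channels: int = 2):
        super().__init__()
        # Transfer learning: start from ImageNet weights.
        backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        # Adapt the first convolution to event-frame channels (e.g. per-pixel
        # counts of positive/negative events); this layer is re-initialized.
        backbone.conv1 = nn.Conv2d(in_channels, 64, kernel_size=7,
                                   stride=2, padding=3, bias=False)
        # Replace the 1000-way classifier with a single-output regression head.
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)
        self.backbone = backbone

    def forward(self, event_frames: torch.Tensor) -> torch.Tensor:
        # event_frames: (batch, 2, H, W) event-count images -> steering angle per sample.
        return self.backbone(event_frames).squeeze(-1)


if __name__ == "__main__":
    model = EventSteeringNet()
    dummy = torch.rand(4, 2, 224, 224)                # a batch of synthetic event frames
    steering = model(dummy)                           # predicted steering angles
    loss = nn.functional.mse_loss(steering, torch.zeros(4))  # regression loss vs. ground truth
    print(steering.shape, loss.item())
```

In this sketch the fine-tuning objective is a plain regression loss against recorded steering angles; the exact event representation, loss, and training schedule used in the paper may differ.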
