Conference paper · IEEE Vehicular Technology Conference

Predicting Steering Actions for Self-Driving Cars Through Deep Learning



Abstract

We propose a vision-based end-to-end lane-following system that fuses temporal and spatial visual information to predict current and future control variables. Previous work predicts the control variable only for the next time point, using the current visual information alone. In contrast, building on a long-term recurrent convolutional neural network, we investigate the effect of fusing history information of different lengths to predict the imminent control variable at different future horizons. Experimental results show that with a long history of visual information, the neural network can approximate human driving behaviour with high precision. Consistent with intuition, the influence of history information declines as the prediction moves further into the future. History information from the past 0.6 seconds is the most informative for prediction: the mean squared error (MSE) of the steering command prediction with 0.6 s of history is 8.378 × 10⁻³ m⁻¹. By training the model with control signals that lag behind the visual information as targets, the testing results show that future control variables can be predicted with high accuracy, with the best accuracy achieved for the steering command 0.4 seconds ahead.
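To make the history-fusion idea concrete, here is a minimal sketch, not the authors' implementation: a fixed-length buffer holds the visual features of the most recent 0.6 s of frames (the frame rate of 10 Hz, the class names, and the `mse` helper are all assumptions for illustration; the abstract does not specify them), and a recurrent head would consume that window to predict the steering command, scored by MSE.

```python
from collections import deque

HISTORY_SECONDS = 0.6   # most informative history length reported in the paper
FRAME_RATE_HZ = 10      # assumed camera rate (not stated in the abstract)
HISTORY_FRAMES = int(HISTORY_SECONDS * FRAME_RATE_HZ)  # 6 frames of history


def mse(predictions, targets):
    """Mean squared error between predicted and ground-truth steering commands."""
    assert len(predictions) == len(targets) and len(targets) > 0
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)


class HistoryBuffer:
    """Sliding window over the most recent per-frame visual features.

    A recurrent model (e.g. an LSTM head on CNN features) would read the
    whole window at each step instead of only the current frame.
    """

    def __init__(self, length=HISTORY_FRAMES):
        # deque with maxlen silently drops the oldest frame when full
        self.frames = deque(maxlen=length)

    def push(self, feature):
        self.frames.append(feature)

    def ready(self):
        # Predictions are only made once a full 0.6 s window is available.
        return len(self.frames) == self.frames.maxlen

    def window(self):
        return list(self.frames)
```

At 10 Hz the buffer keeps exactly six frames, so after ten pushes only the last six remain; `mse` then compares a sequence of predicted steering commands against ground truth.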
