IEEE Transactions on Intelligent Transportation Systems

Detecting Driver Behavior Using Stacked Long Short Term Memory Network With Attention Layer


Abstract

Driver distraction is one of the primary causes of fatal car accidents. Modern cars with advanced infotainment systems often divert cognitive attention away from the road, causing further distraction. Driver behavior analysis can be used to address this problem. Three important features of intelligence and cognition are perception, attention, and sensory memory. In this work, we use a stacked LSTM network with an attention layer to detect driver distraction from driving data, and we compare it with plain stacked LSTM and MLP models to show the positive effect of the attention mechanism on performance. We conducted an experiment with eight driving scenarios and collected a large dataset of driving data. First, an MLP was built to detect driver distraction. Next, we increased the intelligence level of the system by using an LSTM network. Third, we added an attention mechanism on top of the stacked LSTM to further enhance performance. We show that each of these three increments increases intelligence by reducing training and test error. The minimum training and test errors of the stacked LSTM were 0.57 and 0.9, which were 0.4 lower than the corresponding minimum errors of the MLP. Adding attention to the stacked LSTM brought the training and test errors to 0.69 and 0.75, narrowing the gap between them. The results also show that adding attention diminishes overfitting and reduces computational expense.
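
The abstract does not specify the exact architecture, so the following is a minimal, hypothetical PyTorch sketch of the kind of model it describes: a stacked (multi-layer) LSTM over windows of driving signals, a simple additive attention layer that pools the per-time-step hidden states, and a classifier head for the distracted/attentive decision. The layer sizes, input dimensionality, and attention variant are illustrative assumptions, not the authors' published settings.

import torch
import torch.nn as nn

class StackedLSTMAttention(nn.Module):
    def __init__(self, n_features: int, hidden_size: int = 64,
                 num_layers: int = 2, n_classes: int = 2):
        super().__init__()
        # "Stacked" LSTM: with num_layers > 1, each layer's output
        # sequence is fed into the next LSTM layer.
        self.lstm = nn.LSTM(n_features, hidden_size,
                            num_layers=num_layers, batch_first=True)
        # Additive attention: score every time step's hidden state,
        # then pool the sequence with the softmax-normalized weights
        # instead of keeping only the final hidden state.
        self.attn_score = nn.Linear(hidden_size, 1)
        self.classifier = nn.Linear(hidden_size, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, n_features) window of driving signals
        outputs, _ = self.lstm(x)                  # (batch, T, hidden)
        weights = torch.softmax(self.attn_score(outputs), dim=1)
        context = (weights * outputs).sum(dim=1)   # (batch, hidden)
        return self.classifier(context)            # (batch, n_classes)

# Dummy batch: 32 windows of 100 time steps with 10 driving-signal channels.
model = StackedLSTMAttention(n_features=10)
logits = model(torch.randn(32, 100, 10))
print(logits.shape)  # torch.Size([32, 2])

Because the attention weights form an explicit distribution over time steps, a model of this shape also lets one inspect which parts of a driving window drove the distraction decision, which is consistent with the abstract's emphasis on attention as a feature of cognition.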
