German Conference on Pattern Recognition

Context-driven Multi-stream LSTM (M-LSTM) for Recognizing Fine-Grained Activity of Drivers



Abstract

Automatic recognition of in-vehicle activities has a significant impact on next-generation intelligent vehicles. In this paper, we present a novel Multi-stream Long Short-Term Memory (M-LSTM) network for recognizing driver activities. We bring together ideas from recent work on LSTMs and on transfer learning for object detection and body pose estimation by exploring the use of deep convolutional neural networks (CNNs). Recent work has also shown that representations such as hand-object interactions are important cues for characterizing human activities. The proposed M-LSTM integrates these ideas under one framework, in which two streams focus on appearance information at two different levels of abstraction, while the other two streams analyze contextual information involving the configuration of body parts and body-object interactions. The proposed contextual descriptor is built to be semantically rich and meaningful, and even when coupled with appearance features it turns out to be highly discriminative. We validate this on two challenging datasets consisting of driver activities.
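To make the four-stream idea concrete, below is a minimal PyTorch sketch of a multi-stream LSTM with late fusion: one LSTM per stream (two appearance streams and two contextual streams), whose final hidden states are concatenated and classified. The feature dimensions, hidden size, number of classes, and concatenation-based fusion are illustrative assumptions, not the authors' exact configuration; the per-frame CNN appearance features and pose/object descriptors are assumed to be precomputed.

```python
import torch
import torch.nn as nn

class MultiStreamLSTM(nn.Module):
    """Sketch of a four-stream LSTM with late fusion (illustrative only).

    Streams, following the abstract: two appearance streams at different
    levels of abstraction, plus two contextual streams (body-part
    configuration and body-object interactions).
    """
    def __init__(self, feat_dims=(2048, 1024, 128, 128),
                 hidden=256, num_classes=8):
        super().__init__()
        # One LSTM per input stream; dimensions are hypothetical.
        self.streams = nn.ModuleList(
            nn.LSTM(d, hidden, batch_first=True) for d in feat_dims
        )
        self.classifier = nn.Linear(hidden * len(feat_dims), num_classes)

    def forward(self, inputs):
        # inputs: list of per-stream tensors, each (batch, time, feat_dim)
        finals = []
        for lstm, x in zip(self.streams, inputs):
            _, (h_n, _) = lstm(x)      # h_n: (num_layers, batch, hidden)
            finals.append(h_n[-1])     # final hidden state of the top layer
        fused = torch.cat(finals, dim=1)   # simple concatenation fusion
        return self.classifier(fused)


if __name__ == "__main__":
    # Toy usage: 2 clips of 16 frames with random pre-extracted features.
    model = MultiStreamLSTM()
    feats = [torch.randn(2, 16, d) for d in (2048, 1024, 128, 128)]
    logits = model(feats)
    print(logits.shape)  # torch.Size([2, 8])
```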
