ACM Transactions on Spatial Algorithms and Systems

Intelligent Intersection: Two-stream Convolutional Networks for Real-time Near-accident Detection in Traffic Video

Abstract

Camera-based systems are increasingly used to collect information on intersections and arterials. Unlike loop controllers, which can generally only detect the presence and movement of vehicles, cameras can provide rich information about traffic behavior. Vision-based frameworks for multiple-object detection, object tracking, and near-miss detection have been developed to derive this information. However, much of this work addresses processing videos offline. In this article, we propose an integrated two-stream convolutional networks architecture that performs real-time detection, tracking, and near-accident detection of road users in traffic video data. The two-stream model consists of a spatial stream network for object detection and a temporal stream network that leverages motion features for multiple-object tracking. We detect near-accidents by combining appearance features and motion features from these two networks. Further, we demonstrate that our approach runs in real time, at a frame rate higher than the video frame rate, on a variety of videos collected from fisheye and overhead cameras.
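
As a rough illustration of the two-stream idea described in the abstract, the sketch below pairs a spatial stream over a single RGB frame with a temporal stream over stacked optical-flow fields and fuses their features into a single near-accident score. This is a minimal PyTorch sketch, not the paper's implementation; the backbone depth, channel widths, flow-stack size, and fusion head are all illustrative assumptions.

```python
# Minimal two-stream sketch (illustrative assumptions, not the authors' code):
# a spatial stream over an RGB frame, a temporal stream over stacked optical
# flow, and a small fusion head producing a near-accident logit per clip.
import torch
import torch.nn as nn


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """3x3 conv -> batch norm -> ReLU -> 2x2 max-pool."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )


class StreamBackbone(nn.Module):
    """Small CNN used by both streams; only the input channel count differs."""

    def __init__(self, in_channels: int, feat_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(in_channels, 32),
            conv_block(32, 64),
            conv_block(64, feat_dim),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> (N, feat_dim, 1, 1)
        )
        self.feat_dim = feat_dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.features(x).flatten(1)  # (N, feat_dim)


class TwoStreamNearAccident(nn.Module):
    """Fuse appearance (RGB) and motion (optical flow) features into a score."""

    def __init__(self, flow_stack: int = 10):
        super().__init__()
        self.spatial = StreamBackbone(in_channels=3)                # one RGB frame
        self.temporal = StreamBackbone(in_channels=2 * flow_stack)  # stacked x/y flow
        self.head = nn.Sequential(
            nn.Linear(self.spatial.feat_dim + self.temporal.feat_dim, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, 1),  # near-accident logit
        )

    def forward(self, rgb: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.spatial(rgb), self.temporal(flow)], dim=1)
        return self.head(fused)


if __name__ == "__main__":
    model = TwoStreamNearAccident(flow_stack=10)
    rgb = torch.randn(2, 3, 224, 224)    # batch of RGB frames
    flow = torch.randn(2, 20, 224, 224)  # 10 stacked (x, y) flow fields
    print(model(rgb, flow).shape)        # torch.Size([2, 1])
```

In a full pipeline, the spatial stream would also drive per-frame object detection and the temporal stream would support multiple-object tracking; the fusion shown here only sketches how appearance and motion features can be combined for near-accident scoring.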
