Journal: Computer-Aided Civil and Infrastructure Engineering

A methodology for obtaining spatiotemporal information of the vehicles on bridges based on computer vision



Abstract

Spatiotemporal information about the vehicles on a bridge is important evidence of the bridge's stress state and traffic density. A methodology for obtaining this information is proposed based on computer vision, comprising detection by a Faster region-based convolutional neural network (Faster R-CNN), multiple-object tracking, and image calibration. To minimize detection time, the ZF (Zeiler & Fergus) model with five convolutional layers is selected as the part shared between the Region Proposal Network and Fast R-CNN within Faster R-CNN. An image data set of 1,694 images covering eight vehicle types is established for training the Faster R-CNN. Combined with the detection results of each video frame, methods of multiple-object tracking and image calibration are developed to acquire the vehicle parameters, including the length, number of axles, speed, and the lane the vehicle occupies. Tracking is based mainly on judging the distances between vehicle bounding boxes within a virtual detection region. Image calibration is based on moving standard vehicles of known length, which serve as 3D templates for computing the vehicle parameters. Once the parameters are acquired, the spatiotemporal information of the vehicles can be obtained. The proposed system runs at a frame rate of 16 fps and needs only two cameras as input devices. The system was successfully applied to a double-tower cable-stayed bridge: the identification accuracies for vehicle type and number of axles are about 90% and 73% in the virtual detection region, and the speed errors of most vehicles are less than 6%.
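The tracking step described above — associating each new detection with an existing track by the distance between bounding boxes in the virtual detection region — can be sketched as a greedy nearest-neighbor matcher. This is an illustrative sketch, not the authors' implementation; the function names, the `max_dist` threshold, and the data shapes are assumptions.

```python
import math

def box_center(box):
    """Center (x, y) of an (x1, y1, x2, y2) bounding box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def associate_detections(tracks, detections, max_dist=50.0):
    """Greedily match each existing track to the nearest new detection.

    tracks      -- dict mapping track id -> last bounding box
    detections  -- list of bounding boxes from the current frame
    max_dist    -- largest center-to-center distance (pixels) still
                   considered the same vehicle (assumed threshold)

    Returns (assignments, unmatched): a dict of track id -> detection
    index, and the set of detection indices left over (new vehicles).
    """
    assignments = {}
    unmatched = set(range(len(detections)))
    for track_id, last_box in tracks.items():
        cx, cy = box_center(last_box)
        best, best_d = None, max_dist
        for j in unmatched:
            dx, dy = box_center(detections[j])
            d = math.hypot(dx - cx, dy - cy)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            assignments[track_id] = best
            unmatched.discard(best)
    return assignments, unmatched
```

At 16 fps a vehicle moves only a few pixels between frames, so a simple distance threshold is usually enough to keep identities consistent inside the detection region; detections left unmatched start new tracks.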
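The calibration idea — using a moving standard vehicle of known length as a template to convert pixel measurements into physical ones — can likewise be sketched with a simple per-lane scale factor. This is a minimal sketch under the assumption of a roughly constant metres-per-pixel scale along the lane; the real method uses a 3D template, and all names and values here are illustrative.

```python
def pixel_scale(known_length_m, pixel_length):
    """Metres per pixel along the lane, calibrated from a standard
    vehicle whose real length (m) and image length (px) are known."""
    return known_length_m / pixel_length

def estimate_speed(px_positions, fps, scale_m_per_px):
    """Speed in km/h from a vehicle's per-frame positions (pixels)
    along the lane direction, given the frame rate and calibration."""
    if len(px_positions) < 2:
        return 0.0
    dist_px = abs(px_positions[-1] - px_positions[0])
    time_s = (len(px_positions) - 1) / fps
    return dist_px * scale_m_per_px / time_s * 3.6
```

For example, a 12 m standard truck spanning 600 px gives a scale of 0.02 m/px; a vehicle advancing 50 px in one frame at 16 fps is then moving at about 57.6 km/h.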

Bibliographic information

  • Affiliation

    Southeast Univ, Sch Civil Engn, Jiangsu Key Lab Engn Mech, Nanjing 210018, Jiangsu, Peoples R China

  • Format: PDF
  • Language: English
