Computer-Aided Civil and Infrastructure Engineering

A methodology for obtaining spatiotemporal information of the vehicles on bridges based on computer vision



Abstract

Spatiotemporal information of the vehicles on a bridge is important evidence of the bridge's stress state and traffic density. A methodology for obtaining this information is proposed based on computer vision technology, comprising detection by a Faster region-based convolutional neural network (Faster R-CNN), multiple-object tracking, and image calibration. To minimize the detection time, the ZF (Zeiler & Fergus) model with five convolutional layers is selected as the part shared between the Region Proposal Network and Fast R-CNN within Faster R-CNN. An image data set of 1,694 images covering eight vehicle types is established for training Faster R-CNN. Combined with per-frame detection of the video, multiple-object tracking and image calibration methods are developed to acquire vehicle parameters, including the length, number of axles, speed, and the lane the vehicle occupies. Tracking is based mainly on judging the distances between vehicle bounding boxes within a virtual detection region. Image calibration is based on moving standard vehicles of known length, which serve as 3D templates for calculating the vehicle parameters. After acquiring the vehicles' parameters, their spatiotemporal information can be obtained. The proposed system runs at a frame rate of 16 fps and needs only two cameras as input devices. The system is successfully applied on a double-tower cable-stayed bridge; the identification accuracies for vehicle type and number of axles are about 90% and 73%, respectively, in the virtual detection region, and the speed errors of most vehicles are less than 6%.
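The tracking step described above associates detections across frames by the distances between bounding boxes in the virtual detection region. A minimal sketch of this idea follows, using greedy nearest-centroid matching; the function names, data layout, and distance threshold are illustrative assumptions, not the authors' implementation:

```python
import math

def centroid(box):
    # box = (x1, y1, x2, y2) in pixels
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

def match_tracks(tracks, detections, max_dist=50.0):
    """Greedily associate existing tracks with new-frame detections
    by centroid distance. `tracks` maps track id -> last known box;
    returns the updated id -> box mapping. Unmatched detections start
    new tracks (a vehicle entering the virtual detection region).
    The 50-pixel threshold is an assumption, not from the paper."""
    assigned = {}
    used = set()
    next_id = max(tracks, default=-1) + 1
    for tid, box in tracks.items():
        cx, cy = centroid(box)
        best, best_d = None, max_dist
        for i, det in enumerate(detections):
            if i in used:
                continue
            dx, dy = centroid(det)
            d = math.hypot(dx - cx, dy - cy)
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            assigned[tid] = detections[best]
            used.add(best)
    for i, det in enumerate(detections):
        if i not in used:  # new vehicle entering the region
            assigned[next_id] = det
            next_id += 1
    return assigned
```

With per-frame Faster R-CNN detections fed in, the persistent track IDs give each vehicle's pixel trajectory over time; combined with the calibration from known-length vehicles, those trajectories yield speeds and lane positions.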

Bibliographic information

  • Source
  • Author affiliation

    Southeast Univ Sch Civil Engn Jiangsu Key Lab Engn Mech Nanjing 210018 Jiangsu Peoples R China;

  • Indexing information
  • Format: PDF
  • Language: eng
  • CLC classification
  • Keywords

