...
Pattern Recognition: The Journal of the Pattern Recognition Society

A fusion network for road detection via spatial propagation and spatial transformation



Abstract

In this paper, we address the fusion of image and point cloud data for road detection. To take advantage of both deep networks and multi-modal data fusion, we propose an end-to-end road segmentation network called SPSTFN (Spatial Propagation and Spatial Transformation Fusion Network). Our method is the first to consider model-level fusion and dual-view fusion in the network simultaneously. Specifically, the proposed SPSTFN contains three parts: the point cloud branch, the image branch, and the fusion block. First, we design a simple but efficient lightweight network to handle the unordered and sparse point cloud and obtain a coarse representation of the road area. Second, an equal-resolution convolutional block is adopted to capture the low-level features of the image, which are used to produce the heat diffusion coefficients of the joint anisotropic-diffusion-based spatial propagation model. Third, we conduct the diffusion process on the coarse representation under the guidance of the learned low-level image features, in both the perspective and bird's-eye views, via the spatial transformation in the network. Finally, the diffusion results of the two views are integrated to generate the final refined representation of the road area. The proposed fusion method is entirely data-driven and parameter-free, and the whole fusion network can be trained with the standard BP (Back Propagation) algorithm. Without any additional processing steps or pre-training, the proposed method obtains competitive results on the KITTI Road Benchmark. (C) 2019 Elsevier Ltd. All rights reserved.
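To make the diffusion step concrete, the following is a minimal NumPy sketch of anisotropic-diffusion-style spatial propagation: a coarse road-probability map (as the point cloud branch would produce) is iteratively refined using per-pixel diffusion coefficients. The function name, the four-neighbour coefficient layout, and the step size are illustrative assumptions, not the paper's implementation; in SPSTFN the coefficients are learned from low-level image features, whereas here they are simply supplied as an array.

```python
import numpy as np

def spatial_propagation(coarse, coeffs, iterations=20, step=0.2):
    """Refine a coarse road-probability map by anisotropic diffusion.

    coarse : (H, W) initial road probabilities (e.g. from a point cloud branch).
    coeffs : (H, W, 4) per-pixel diffusion coefficients in [0, 1] toward the
             up/down/left/right neighbours (hypothetical layout; in the paper
             these are produced from low-level image features).
    """
    u = coarse.astype(float).copy()
    for _ in range(iterations):
        # Neighbour differences with edge padding (zero flux at the border).
        p = np.pad(u, 1, mode="edge")
        up    = p[:-2, 1:-1] - u
        down  = p[2:,  1:-1] - u
        left  = p[1:-1, :-2] - u
        right = p[1:-1, 2:]  - u
        # Weighted update: flow is damped where coefficients are small
        # (e.g. across strong image edges), preserving road boundaries.
        u = u + step * (coeffs[..., 0] * up + coeffs[..., 1] * down
                        + coeffs[..., 2] * left + coeffs[..., 3] * right)
    return np.clip(u, 0.0, 1.0)
```

As a usage note, Perona–Malik-style coefficients such as `np.exp(-(grad / K) ** 2)` computed from image gradients give the intended behaviour: the coarse map smooths within homogeneous road regions while diffusion stalls at image edges. With `step * 4 <= 1` and coefficients in [0, 1] the explicit update is numerically stable.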


