IEEE Transactions on Geoscience and Remote Sensing

RoadNet: Learning to Comprehensively Analyze Road Networks in Complex Urban Scenes From High-Resolution Remotely Sensed Images

Abstract

It is a classical task to automatically extract road networks from very high-resolution (VHR) images in remote sensing. This paper presents a novel method for extracting road networks from VHR remotely sensed images in complex urban scenes. Inspired by image segmentation, edge detection, and object skeleton extraction, we develop a multitask convolutional neural network (CNN), called RoadNet, to simultaneously predict road surfaces, edges, and centerlines, which is the first work in this field. The RoadNet solves seven important issues in this vision problem: 1) automatically learning multiscale and multilevel features [gained by the deeply supervised nets (DSN) providing integrated direct supervision] to cope with roads in various scenes and scales; 2) holistically training the mentioned tasks in a cascaded end-to-end CNN model; 3) correlating the predictions of road surfaces, edges, and centerlines in a network model to improve the multitask prediction; 4) designing an elaborate architecture and loss function, by which the well-trained model produces approximately single-pixel-width road edges/centerlines without nonmaximum suppression postprocessing; 5) cropping and bilinear blending to deal with large VHR images under finite computing resources; 6) introducing rough and simple user interaction to obtain desired predictions in challenging regions; and 7) establishing a benchmark data set which consists of a series of VHR remote sensing images with pixelwise annotation. Different from previous works, we pay more attention to challenging situations in which there are many shadows and occlusions along the road regions. Experimental results on two benchmark data sets show the superiority of our proposed approaches.
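The abstract describes a cascaded, deeply supervised multitask design: the surface prediction conditions the edge branch, which in turn conditions the centerline branch, and every stage emits a supervised side output. Below is a minimal PyTorch-style sketch of that idea; the module layout, channel widths, and the class-balanced loss are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of a cascaded multitask network in the spirit of RoadNet:
# surface branch -> edge branch -> centerline branch, each with DSN-style
# side outputs. Not the paper's exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class Branch(nn.Module):
    """VGG-like branch with a 1-channel side output at every stage (deep supervision)."""
    def __init__(self, in_ch, widths=(32, 64, 128)):
        super().__init__()
        self.stages, self.sides = nn.ModuleList(), nn.ModuleList()
        ch = in_ch
        for w in widths:
            self.stages.append(conv_block(ch, w))
            self.sides.append(nn.Conv2d(w, 1, 1))
            ch = w
        self.fuse = nn.Conv2d(len(widths), 1, 1)  # fuse all side outputs

    def forward(self, x):
        h, w = x.shape[2:]
        side_maps = []
        for i, (stage, side) in enumerate(zip(self.stages, self.sides)):
            x = stage(x)
            side_maps.append(F.interpolate(side(x), size=(h, w),
                                           mode="bilinear", align_corners=False))
            if i < len(self.stages) - 1:
                x = F.max_pool2d(x, 2)
        fused = self.fuse(torch.cat(side_maps, dim=1))
        return side_maps + [fused]

class RoadNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.surface = Branch(in_ch=3)         # RGB image
        self.edge = Branch(in_ch=3 + 1)        # image + surface prediction
        self.centerline = Branch(in_ch=3 + 1)  # image + edge prediction

    def forward(self, img):
        surf = self.surface(img)
        edge = self.edge(torch.cat([img, torch.sigmoid(surf[-1])], dim=1))
        line = self.centerline(torch.cat([img, torch.sigmoid(edge[-1])], dim=1))
        return surf, edge, line

def deep_supervision_loss(outputs, target):
    """Class-balanced BCE summed over every side output and the fused map."""
    pos = target.sum()
    pos_weight = (target.numel() - pos) / (pos + 1e-6)  # up-weight sparse road pixels
    return sum(F.binary_cross_entropy_with_logits(o, target, pos_weight=pos_weight)
               for o in outputs)
```

At training time the three branch losses (against surface, edge, and centerline targets) would simply be summed, which corresponds to the holistic end-to-end training the abstract describes.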
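Point 5 of the abstract mentions cropping large VHR images and bilinearly blending the per-tile predictions. One common way to realize such seamless stitching is to weight each tile's output with a window that decays linearly toward the tile borders and normalize by the accumulated weights. The sketch below illustrates that scheme; the tile size, stride, and the generic predict callable are assumptions, not the paper's exact procedure.

```python
# Hypothetical crop-and-blend inference over a large image (H, W >= tile).
import numpy as np

def bilinear_window(size):
    """2-D weight window that falls off linearly toward the tile borders."""
    ramp = np.maximum(1.0 - np.abs(np.linspace(-1.0, 1.0, size)), 1e-3)
    return np.outer(ramp, ramp)

def tile_starts(length, tile, stride):
    """Start offsets that cover [0, length) with overlapping tiles."""
    starts = list(range(0, length - tile + 1, stride))
    if starts[-1] != length - tile:
        starts.append(length - tile)   # make sure the last tile reaches the border
    return starts

def predict_large_image(image, predict, tile=512, stride=384):
    """image: (H, W, C) array; predict: maps a (tile, tile, C) crop to a (tile, tile) score map."""
    h, w = image.shape[:2]
    out = np.zeros((h, w), dtype=np.float32)
    weight = np.zeros((h, w), dtype=np.float32)
    win = bilinear_window(tile).astype(np.float32)
    for y in tile_starts(h, tile, stride):
        for x in tile_starts(w, tile, stride):
            pred = predict(image[y:y + tile, x:x + tile])
            out[y:y + tile, x:x + tile] += pred * win
            weight[y:y + tile, x:x + tile] += win
    return out / weight   # weighted average removes visible seams between tiles
```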


