ISPRS Journal of Photogrammetry and Remote Sensing

Developing a multi-filter convolutional neural network for semantic segmentation using high-resolution aerial imagery and LiDAR data



Abstract

Semantic segmentation of LiDAR and high-resolution aerial imagery is one of the most challenging topics in the remote sensing domain. Deep convolutional neural networks (CNNs) and their derivatives have recently shown strong capability in pixel-wise prediction of remote sensing data. Many existing deep learning methods fuse LiDAR and high-resolution aerial imagery in an inter-modal manner and thus overlook the intra-modal statistical characteristics. Additionally, patch-based CNNs can generate salt-and-pepper artifacts, characterized by isolated and spurious pixels along object boundaries and patch edges, leading to unsatisfactory labelling results. This paper presents a semantic segmentation scheme that combines a multi-filter CNN with multi-resolution segmentation (MRS). The multi-filter CNN aggregates LiDAR data and high-resolution optical imagery through multi-modal data fusion for semantic labelling, and MRS is further used to delineate object boundaries in order to reduce the salt-and-pepper artifacts. The proposed method is validated against two datasets: the ISPRS 2D semantic labelling contest of Potsdam and an area of Guangzhou, China, labelled based on existing geodatabases. Various designs of the data fusion strategy, CNN architecture and MRS scale are analyzed and discussed. Compared with other classification methods, our method improves the overall accuracy. Experimental results show that the combined method is an efficient solution for the semantic segmentation of LiDAR and high-resolution imagery.
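
For illustration, the feature-level fusion of the two modalities can be sketched as a two-branch fully convolutional network: each modality is processed by its own convolutional filters (preserving intra-modal characteristics) before the feature maps are concatenated and decoded into per-pixel class scores. The branch depths, channel counts, fusion point and six-class setting below are assumptions made for this sketch, not the paper's exact multi-filter architecture.

```python
# Minimal sketch (assumed architecture, not the authors' exact multi-filter CNN):
# a two-branch network that fuses high-resolution optical imagery with a
# LiDAR-derived nDSM for per-pixel classification.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class TwoBranchFusionNet(nn.Module):
    """Separate encoder per modality, feature-level fusion, shared decoder."""

    def __init__(self, n_classes=6, img_channels=3, lidar_channels=1):
        super().__init__()
        # Intra-modal branches: each modality keeps its own filters.
        self.img_branch = nn.Sequential(
            conv_block(img_channels, 32), nn.MaxPool2d(2),
            conv_block(32, 64),
        )
        self.lidar_branch = nn.Sequential(
            conv_block(lidar_channels, 32), nn.MaxPool2d(2),
            conv_block(32, 64),
        )
        # Fusion by concatenation, then a shared decoder back to full resolution.
        self.decoder = nn.Sequential(
            conv_block(128, 64),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            conv_block(64, 32),
            nn.Conv2d(32, n_classes, kernel_size=1),  # per-pixel class logits
        )

    def forward(self, image, ndsm):
        fused = torch.cat([self.img_branch(image), self.lidar_branch(ndsm)], dim=1)
        return self.decoder(fused)


if __name__ == "__main__":
    net = TwoBranchFusionNet(n_classes=6)
    image = torch.randn(1, 3, 256, 256)   # e.g. an orthophoto patch
    ndsm = torch.randn(1, 1, 256, 256)    # LiDAR-derived normalised DSM
    logits = net(image, ndsm)
    print(logits.shape)                   # torch.Size([1, 6, 256, 256])
```

Keeping a separate filter bank per modality before fusion is one straightforward way to retain intra-modal statistics that a naive channel-stacking of imagery and LiDAR would blur together.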
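The abstract does not specify how the MRS objects are combined with the CNN output; a common object-based refinement, shown here purely as an assumed example, is to relabel each segmented object with the majority CNN prediction of its pixels, which suppresses isolated salt-and-pepper pixels inside object boundaries.

```python
# Minimal sketch (assumed refinement rule, not necessarily the paper's exact
# strategy): per-object majority vote over the CNN's per-pixel labels.
import numpy as np


def object_majority_vote(cnn_labels, segment_ids):
    """Relabel each segment with the most frequent CNN prediction inside it.

    cnn_labels  : (H, W) int array of per-pixel CNN class labels
    segment_ids : (H, W) int array of object ids from a segmentation such as MRS
    """
    refined = cnn_labels.copy()
    for seg in np.unique(segment_ids):
        mask = segment_ids == seg
        counts = np.bincount(cnn_labels[mask])
        refined[mask] = np.argmax(counts)
    return refined


if __name__ == "__main__":
    labels = np.array([[0, 0, 1], [0, 2, 0], [1, 1, 1]])
    segments = np.array([[0, 0, 0], [0, 0, 1], [1, 1, 1]])
    print(object_majority_vote(labels, segments))
    # [[0 0 0]
    #  [0 0 1]
    #  [1 1 1]]
```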


