
Integration of Convolutional Neural Networks and Object-Based Post-Classification Refinement for Land Use and Land Cover Mapping with Optical and SAR Data

Abstract

Object-based image analysis (OBIA) has been widely used for land use and land cover (LULC) mapping with optical and synthetic aperture radar (SAR) images because it can exploit spatial information, reduce salt-and-pepper noise, and delineate LULC boundaries. With recent advances in machine learning, convolutional neural networks (CNNs) have become state-of-the-art classifiers. However, CNNs cannot be easily integrated with OBIA because the processing unit of a CNN is a rectangular image patch, whereas that of OBIA is an irregular image object. To obtain object-based thematic maps, this study developed a new method that integrates object-based post-classification refinement (OBPR) with CNNs for LULC mapping using Sentinel optical and SAR data. After the CNN produces the classification map, each image object is labeled with the most frequent land cover category among its pixels. The proposed method was tested on the optical-SAR Sentinel Guangzhou dataset (10 m spatial resolution), the optical-SAR Zhuhai-Macau local climate zones (LCZ) dataset (100 m spatial resolution), and the University of Pavia hyperspectral benchmark (1.3 m spatial resolution). It outperformed OBIA with support vector machine (SVM) and random forest (RF) classifiers. SVM and RF benefited more from the combined use of optical and SAR data than the CNN did, whereas the spatial information learned by the CNN was highly effective for classification. With the ability to extract spatial features while maintaining object boundaries, the proposed method considerably improved the classification accuracy of urban ground targets. It achieved an overall accuracy (OA) of 95.33% on the Sentinel Guangzhou dataset, 77.64% on the Zhuhai-Macau LCZ dataset, and 95.70% on the University of Pavia dataset with only 10 labeled samples per class.
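
The core of the OBPR step described above is a majority vote within each segmented image object. Below is a minimal sketch of that vote, assuming the per-pixel CNN classification map and an object (segment) ID map are available as NumPy integer arrays; the function name and toy data are illustrative assumptions, not code from the paper.

```python
import numpy as np

def obpr_majority_vote(cnn_labels: np.ndarray, segments: np.ndarray) -> np.ndarray:
    """Relabel each pixel with the majority CNN class of its image object.

    cnn_labels: (H, W) integer array of per-pixel classes from the CNN.
    segments:   (H, W) integer array of object IDs from any segmentation
                (e.g. the multiresolution segmentation commonly used in OBIA).
    """
    refined = np.empty_like(cnn_labels)
    for obj_id in np.unique(segments):
        mask = segments == obj_id
        # Most frequent land cover category among this object's pixels.
        refined[mask] = np.bincount(cnn_labels[mask]).argmax()
    return refined

# Toy example: two 4x2 objects; stray labels inside each are voted away.
cnn_labels = np.array([[0, 0, 1, 1],
                       [0, 1, 1, 1],
                       [2, 0, 1, 1],
                       [0, 0, 1, 0]])
segments = np.array([[0, 0, 1, 1],
                     [0, 0, 1, 1],
                     [0, 0, 1, 1],
                     [0, 0, 1, 1]])
print(obpr_majority_vote(cnn_labels, segments))
# -> left object becomes all class 0, right object all class 1
```

Because every pixel in an object receives the object's majority class, isolated misclassified pixels are smoothed away while the boundaries produced by the segmentation are preserved, which is the salt-and-pepper reduction and boundary delineation the abstract attributes to the combined method.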
