ISPRS Journal of Photogrammetry and Remote Sensing

Deep learning for multi-modal classification of cloud, shadow and land cover scenes in PlanetScope and Sentinel-2 imagery



Abstract

With the increasing availability of high-resolution satellite imagery, it is important to improve the efficiency and accuracy of satellite image indexing, retrieval and classification. Furthermore, there is a need to utilize all available satellite imagery in identifying general land cover types and monitoring their changes through time, irrespective of their spatial, spectral, temporal and radiometric resolutions. Therefore, in this study, we developed deep learning models able to efficiently and accurately classify cloud, shadow and land cover scenes in different high-resolution (< 10 m) satellite imagery. Specifically, we trained deep convolutional neural network (CNN) models to perform multi-label classification of multi-modal, high-resolution satellite imagery at the scene level. Multi-label classification at the scene level (a.k.a. image indexing), as opposed to the pixel level, allows for faster performance, higher accuracy (although at the cost of detail) and higher generalizability. We investigated the generalization ability (i.e. cross-dataset and geographic independence) of individual and ensemble CNN models trained on multi-modal satellite imagery (i.e. PlanetScope and Sentinel-2). The models trained on PlanetScope imagery collected over the Amazon performed well when applied to PlanetScope and Sentinel-2 imagery collected over the Wet Tropics of Australia, with F-2 scores of 0.72 and 0.69, respectively. Similarly, PlanetScope-based CNN models trained on imagery collected over the Wet Tropics of Australia performed well when applied to Sentinel-2 imagery, with an F-2 score of 0.76, and the reverse scenario resulted in the same F-2 score of 0.76. This suggests that our CNN models have high cross-dataset generalization ability and are suitable for classifying cloud, shadow and land cover classes in satellite imagery with resolutions from 3 m (PlanetScope) to 10 m (Sentinel-2).
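The F-2 score used above is the F-beta metric with beta = 2, which weights recall twice as heavily as precision; this is a common choice for cloud/shadow screening, where missing a contaminated scene is costlier than a false alarm. As a minimal illustration of the metric (not the authors' evaluation code), a per-scene F-2 can be computed from predicted versus true label sets:

```python
def f_beta(precision: float, recall: float, beta: float = 2.0) -> float:
    """General F-beta score: (1 + b^2) * P * R / (b^2 * P + R).
    beta > 1 favors recall over precision."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)


def scene_f2(true_labels: set, predicted_labels: set) -> float:
    """Per-scene F-2 from multi-label predictions,
    e.g. true = {'cloud', 'forest'}, predicted = {'cloud', 'shadow'}."""
    if not true_labels and not predicted_labels:
        return 1.0  # both empty: perfect agreement
    tp = len(true_labels & predicted_labels)
    precision = tp / len(predicted_labels) if predicted_labels else 0.0
    recall = tp / len(true_labels) if true_labels else 0.0
    return f_beta(precision, recall, beta=2.0)
```

For example, `scene_f2({'cloud', 'forest'}, {'cloud', 'shadow'})` gives precision = recall = 0.5 and hence F-2 = 0.5. Dataset-level scores such as the 0.72 and 0.76 reported above would aggregate this over all scenes.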
The performance of our CNN models was also comparable to the state-of-the-art methods (i.e. Sen2Cor and MACCS) developed specifically for classifying cloud and shadow classes in Sentinel-2 imagery. Finally, we show the potential of our CNN models to mask cloud and shadow contaminated areas from PlanetScope- and Sentinel-2-derived NDVI time-series.
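As a sketch of the final application described above (function names and the label vocabulary are illustrative assumptions, not taken from the paper), scene-level cloud/shadow predictions can be used to drop contaminated observations from an NDVI time series before trend analysis or interpolation:

```python
import numpy as np


def mask_ndvi_series(ndvi: np.ndarray, scene_labels: list) -> np.ndarray:
    """Set NDVI observations to NaN wherever the scene-level classifier
    predicted a 'cloud' or 'shadow' label for that acquisition date.
    ndvi: 1-D array of NDVI values, one per scene.
    scene_labels: list of label sets, one per scene (hypothetical vocabulary)."""
    contaminated = {"cloud", "shadow"}
    masked = ndvi.astype(float).copy()
    for t, labels in enumerate(scene_labels):
        if labels & contaminated:
            masked[t] = np.nan  # flag for downstream gap-filling
    return masked


# Usage: the second observation is cloud-flagged and gets masked out.
series = np.array([0.80, 0.15, 0.75])
labels = [{"forest"}, {"cloud", "forest"}, {"forest"}]
clean = mask_ndvi_series(series, labels)
```

Masking to NaN (rather than deleting entries) keeps the time axis aligned across PlanetScope- and Sentinel-2-derived series, so standard gap-filling routines can be applied afterwards.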
