Journal: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing

Self-Supervised Pretraining of Transformers for Satellite Image Time Series Classification



Abstract

Satellite image time series (SITS) classification is a major research topic in remote sensing and is relevant for a wide range of applications. Deep learning approaches have been commonly employed for SITS classification and have provided state-of-the-art performance. However, deep learning methods suffer from overfitting when labeled data are scarce. To address this problem, we propose a novel self-supervised pretraining scheme to initialize a transformer-based network by utilizing large-scale unlabeled data. Specifically, the model is asked to predict randomly contaminated observations given the entire time series of a pixel. The main idea of our proposal is to leverage the inherent temporal structure of satellite time series to learn general-purpose spectral-temporal representations related to land cover semantics. Once pretraining is completed, the pretrained network can be further adapted to various SITS classification tasks by fine-tuning all the model parameters on small-scale task-related labeled data. In this way, the general knowledge and representations about SITS can be transferred to a label-scarce task, thereby improving the generalization performance of the model as well as reducing the risk of overfitting. Comprehensive experiments have been carried out on three benchmark datasets over large study areas. Experimental results demonstrate the effectiveness of the proposed pretraining scheme, leading to substantial improvements in classification accuracy using transformer, 1-D convolutional neural network, and bidirectional long short-term memory network. The code and the pretrained model will be available at https://github.com/linlei1214/SITS-BERT upon publication.
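
The abstract outlines a BERT-style masked-prediction pretext task: a fraction of a pixel's observations is randomly contaminated, and a transformer encoder is trained to recover the original spectra from the surrounding temporal context. The PyTorch sketch below illustrates that idea under stated assumptions; the names and hyperparameters (SITSEncoder, pretrain_step, mask_ratio, noise_std, band and layer counts) are illustrative and are not taken from the authors' SITS-BERT code.

```python
# Minimal sketch of the masked-observation pretext task described above.
# All names and hyperparameters are illustrative assumptions, not the
# authors' SITS-BERT implementation.
import torch
import torch.nn as nn

class SITSEncoder(nn.Module):
    """Transformer encoder over a single pixel's spectral time series."""
    def __init__(self, n_bands=10, d_model=128, n_heads=8, n_layers=3, max_len=64):
        super().__init__()
        self.embed = nn.Linear(n_bands, d_model)      # per-observation spectral embedding
        self.pos = nn.Embedding(max_len, d_model)     # learned temporal position encoding
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=256, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_bands)       # reconstruct spectra at corrupted steps

    def forward(self, x):                             # x: (batch, time, bands)
        t = torch.arange(x.size(1), device=x.device)
        h = self.embed(x) + self.pos(t)
        return self.head(self.encoder(h))

def pretrain_step(model, x, optimizer, mask_ratio=0.15, noise_std=0.5):
    """Contaminate a random subset of observations and regress the originals."""
    mask = torch.rand(x.shape[:2], device=x.device) < mask_ratio  # (batch, time)
    corrupted = x.clone()
    corrupted[mask] += noise_std * torch.randn_like(corrupted[mask])
    pred = model(corrupted)
    loss = ((pred - x) ** 2)[mask].mean()             # loss only on contaminated steps
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage on synthetic data: 32 pixels, 24 acquisitions, 10 spectral bands.
model = SITSEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 24, 10)
loss = pretrain_step(model, x, opt)
```

After such a pretraining phase, the encoder weights would be retained and a small classification head fine-tuned end-to-end on task-specific labeled pixels, mirroring the transfer step described in the abstract.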

