《测绘学报》 (Acta Geodaetica et Cartographica Sinica)

A Multi-Scale Neural Network Classification Method for High-Resolution Remote Sensing Imagery Scenes

         

Abstract

High-resolution remote sensing imagery scene classification is important for automatic recognition of complex scenes, a key technology for military applications, disaster relief, and other fields. In this paper, we propose a novel joint multi-scale convolutional neural network (JMCNN) method that uses a limited amount of image data for high-resolution remote sensing imagery scene classification. Unlike a traditional convolutional neural network, the proposed JMCNN is an end-to-end trainable model with a jointly enhanced high-level feature representation: it is built from three channels of different scales and comprises a multi-channel feature extractor, a joint multi-scale feature fusion module, and a Softmax classifier. First, the multi-channel, multi-scale convolutional extractors extract middle-level scene features. Then, to obtain an enhanced high-level feature representation from a limited dataset, joint multi-scale feature fusion combines the multi-channel, multi-scale features through two fusion stages. Finally, the enhanced high-level feature representation is classified by Softmax. Experiments were conducted on two small public datasets, UC Merced and SIRI. Compared with state-of-the-art methods, the JMCNN achieved improved performance and strong robustness, reaching average accuracies of 89.3% and 88.3% on the two datasets with small training samples, along with notable gains in feature representation and computation speed.
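The three-stage pipeline the abstract describes (multi-channel extraction at three scales, two-stage joint feature fusion, Softmax classification) can be sketched in pure Python. This is a minimal illustrative sketch only: `extract_features`, the scale choices (2, 4, 8), and the toy classifier weights are assumptions for demonstration, not the paper's actual convolutional layers or learned parameters.

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def extract_features(patch, scale):
    # Stand-in for one convolutional channel: average-pool the 1-D patch
    # into `scale` chunks, yielding a fixed-length middle-level feature.
    step = max(1, len(patch) // scale)
    return [sum(patch[i:i + step]) / step
            for i in range(0, len(patch), step)][:scale]

def jmcnn_forward(patch, num_classes=3):
    # Three channels at different scales (the JMCNN uses three scale
    # channels; the concrete scales here are illustrative).
    feats = [extract_features(patch, s) for s in (2, 4, 8)]
    # First fusion stage: concatenate neighbouring scale channels.
    fused_a = feats[0] + feats[1]
    fused_b = feats[1] + feats[2]
    # Second fusion stage: join both fused vectors into one
    # enhanced high-level feature representation.
    joint = fused_a + fused_b
    # Toy linear head (fixed weights for the sketch) followed by Softmax.
    mean_feat = sum(joint) / len(joint)
    logits = [mean_feat * (c + 1) for c in range(num_classes)]
    return softmax(logits)

probs = jmcnn_forward([0.1 * i for i in range(16)])
```

The two fusion stages mirror the abstract's "two feature fusions": middle-level features are first combined across adjacent scale channels, then joined into a single high-level vector before classification.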
