
Hyperspectral and LiDAR Fusion Using Deep Three-Stream Convolutional Neural Networks

Abstract

Recently, convolutional neural networks (CNN) have been intensively investigated for the classification of remote sensing data, owing to their ability to extract invariant and abstract features suitable for classification. In this paper, a novel framework is proposed for the fusion of hyperspectral images and LiDAR-derived elevation data based on CNN and composite kernels. First, extinction profiles are applied to both data sources in order to extract spatial and elevation features from the hyperspectral and LiDAR-derived data, respectively. Second, a three-stream CNN is designed to extract informative spectral, spatial, and elevation features individually from both available sources. The combination of extinction profiles and CNN features enables us to jointly benefit from low-level and high-level features to improve classification performance. To fuse the heterogeneous spectral, spatial, and elevation features extracted by the CNN, a multi-sensor composite kernels (MCK) scheme is designed instead of a simple stacking strategy. This scheme helps us to achieve higher spectral, spatial, and elevation separability of the extracted features and to effectively perform multi-sensor data fusion in kernel space. In this context, a support vector machine and an extreme learning machine with their composite-kernel versions are employed to produce the final classification result. The proposed framework is evaluated on two widely used data sets with different characteristics: an urban data set captured over Houston, USA, and a rural data set captured over Trento, Italy. The proposed framework yields the highest overall accuracies (OA) of 92.57% and 97.91% for the Houston and Trento data sets, respectively. Experimental results confirm that the proposed fusion framework can produce competitive results in both urban and rural areas in terms of classification accuracy, and significantly mitigates salt-and-pepper noise in the classification maps.
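To make the three-stream design concrete, the following is a minimal sketch of how separate spectral, spatial, and elevation streams could be implemented, each producing its own feature vector; it assumes patch-based 2-D inputs, and the layer counts, channel widths, patch sizes, and the 128-dimensional feature size are illustrative placeholders rather than the authors' exact architecture.

```python
import torch
import torch.nn as nn

class StreamCNN(nn.Module):
    """One convolutional stream: a small conv/ReLU/pool stack ending in a feature vector."""
    def __init__(self, in_channels, feat_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling -> (B, 64, 1, 1)
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

class ThreeStreamCNN(nn.Module):
    """Spectral, spatial, and elevation streams; returns one feature vector per stream."""
    def __init__(self, spectral_ch, spatial_ch, elevation_ch, feat_dim=128):
        super().__init__()
        self.spectral = StreamCNN(spectral_ch, feat_dim)     # raw hyperspectral bands
        self.spatial = StreamCNN(spatial_ch, feat_dim)       # e.g. extinction profiles of the HSI
        self.elevation = StreamCNN(elevation_ch, feat_dim)   # e.g. extinction profiles of the LiDAR DSM

    def forward(self, x_spec, x_spat, x_elev):
        return self.spectral(x_spec), self.spatial(x_spat), self.elevation(x_elev)

# Example with hypothetical patch sizes and band counts:
# model = ThreeStreamCNN(spectral_ch=144, spatial_ch=20, elevation_ch=20)
# f_spec, f_spat, f_elev = model(torch.randn(8, 144, 11, 11),
#                                torch.randn(8, 20, 11, 11),
#                                torch.randn(8, 20, 11, 11))
```

In the paper the three feature sets are not simply concatenated into a softmax head; they are fused in kernel space with the MCK scheme, as sketched next.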
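For the kernel-space fusion step, a common composite-kernel formulation is a weighted sum of per-source kernels, K = w1*K_spectral + w2*K_spatial + w3*K_elevation, passed to a classifier with a precomputed kernel. The sketch below illustrates that idea with scikit-learn and an SVM; the RBF kernel choice, the mixing weights, and the variable names are assumptions for illustration, not the paper's exact MCK construction, and the paper additionally reports a composite-kernel extreme learning machine, which is omitted here for brevity.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def composite_kernel(feats_a, feats_b, weights, gamma=1.0):
    """Weighted sum of per-source RBF kernels over (spectral, spatial, elevation) features."""
    K = np.zeros((feats_a[0].shape[0], feats_b[0].shape[0]))
    for Xa, Xb, w in zip(feats_a, feats_b, weights):
        K += w * rbf_kernel(Xa, Xb, gamma=gamma)
    return K

# Hypothetical CNN feature matrices (n_samples x feat_dim) per source:
# train = [spec_tr, spat_tr, elev_tr]; test = [spec_te, spat_te, elev_te]
# weights = [0.4, 0.3, 0.3]                      # illustrative mixing weights
# clf = SVC(kernel="precomputed")
# clf.fit(composite_kernel(train, train, weights), y_train)
# y_pred = clf.predict(composite_kernel(test, train, weights))
```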
