IEEE Transactions on Geoscience and Remote Sensing

Multiresolution Multimodal Sensor Fusion for Remote Sensing Data With Label Uncertainty


Abstract

In remote sensing, each sensor can provide complementary or reinforcing information. It is valuable to fuse outputs from multiple sensors to boost overall performance. Previous supervised fusion methods often require accurate labels for each pixel in the training data. However, in many remote-sensing applications, pixel-level labels are difficult or infeasible to obtain. In addition, outputs from multiple sensors often have different resolutions or modalities. For example, rasterized hyperspectral imagery (HSI) presents data in a pixel grid while airborne light detection and ranging (LiDAR) generates dense 3-D point clouds. It is often difficult to directly fuse such multimodal, multiresolution data. To address these challenges, we present a novel multiple instance multiresolution fusion (MIMRF) framework that can fuse multiresolution and multimodal sensor outputs while learning from automatically generated, imprecisely labeled data. Experiments were conducted on the MUUFL Gulfport HSI and LiDAR data set and a remotely sensed soybean and weed data set. Results show improved, consistent performance on scene understanding and agricultural applications when compared to traditional fusion methods.
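The abstract does not spell out the MIMRF formulation, but its core idea — learning a sensor fusion from bag-level, imprecise labels rather than per-pixel ground truth — can be illustrated with a minimal multiple-instance-style sketch. Everything below is a toy assumption for illustration (the synthetic bags, the two-sensor confidence columns, the max-pooling and the single convex fusion weight `w`), not the paper's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): each "bag" is an image region with one imprecise
# label; instances are candidate pixels, each carrying two sensor confidences
# (e.g., one HSI-derived, one LiDAR-derived, already co-registered).
def make_bag(positive, n_instances=8):
    base = rng.uniform(0.0, 0.3, size=(n_instances, 2))
    if positive:
        # A positive bag contains at least one true target pixel,
        # but we do not know which one — that is the label uncertainty.
        base[rng.integers(n_instances)] += 0.6
    return np.clip(base, 0.0, 1.0)

bags = [make_bag(p) for p in [True] * 20 + [False] * 20]
labels = np.array([1.0] * 20 + [0.0] * 20)

def bag_scores(bags, w):
    # Fuse the two sensor confidences with a convex weight w, then
    # max-pool over instances: a bag is positive if its best fused
    # instance is strong (classic multiple-instance assumption).
    return np.array([np.max(b @ np.array([w, 1.0 - w])) for b in bags])

# Grid-search the fusion weight against the bag-level labels only.
candidates = np.linspace(0.0, 1.0, 101)
losses = [np.mean((bag_scores(bags, w) - labels) ** 2) for w in candidates]
w_best = candidates[int(np.argmin(losses))]
print(f"best fusion weight for sensor 1: {w_best:.2f}")
```

The point of the sketch is the supervision granularity: the loss only ever compares a pooled, bag-level score to the bag label, so no pixel-accurate annotation is needed — the property the abstract highlights for remote sensing, where such annotation is often infeasible.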

