Clusters versus FPGAs for Spectral Mixture Analysis-Based Lossy Hyperspectral Data Compression

Abstract

The increasing number of airborne and satellite platforms that incorporate hyperspectral imaging spectrometers has created a pressing need for efficient storage, transmission, and data compression methodologies. In particular, hyperspectral data compression is expected to play a crucial role in many remote sensing applications. Many efforts have been devoted to designing and developing lossless and lossy compression algorithms for hyperspectral imagery. However, most available lossy compression approaches have largely overlooked the impact of mixed pixels and subpixel targets, which can be accurately modeled and uncovered by resorting to the wealth of spectral information provided by hyperspectral image data. In this paper, we develop a simple lossy compression technique that relies on the concept of spectral unmixing, one of the most popular approaches for dealing with mixed pixels and subpixel targets in hyperspectral analysis. The proposed method uses a two-stage approach in which the purest spectral signatures (also called endmembers) are first extracted from the input data and then used to express mixed pixels as linear combinations of endmembers. Analytical and experimental results are presented in the context of a real application, using hyperspectral data collected by NASA's Jet Propulsion Laboratory over the World Trade Center area in New York City shortly after the terrorist attacks of September 11th. These data are used to evaluate the impact of different compression methods on spectral signature quality for the accurate detection of hot-spot fires. Two parallel implementations of the proposed lossy compression algorithm are developed: a multiprocessor implementation tested on Thunderhead, a massively parallel Beowulf cluster at NASA's Goddard Space Flight Center, and a hardware implementation developed on a Xilinx Virtex-II FPGA device. Combined, these implementations offer a thoughtful perspective on the potential and emerging challenges of incorporating parallel data compression techniques into realistic hyperspectral imaging problems.
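
To make the two-stage idea concrete, the sketch below illustrates unmixing-based lossy compression under stated assumptions: the abstract does not name the paper's endmember extraction algorithm, so a simple greedy orthogonal-projection heuristic (in the spirit of ATGP) stands in for stage one, and unconstrained least-squares unmixing stands in for stage two (practical spectral mixture analysis often adds non-negativity and sum-to-one constraints on the abundances). All function names are illustrative, not the authors' implementation.

```python
import numpy as np

def extract_endmembers(X, p):
    """Stage 1 (stand-in): greedy orthogonal-projection endmember selection.
    X: (num_pixels, num_bands) hyperspectral cube flattened to 2-D.
    Returns E: (p, num_bands) candidate endmember signatures."""
    # Seed with the brightest pixel (largest spectral norm).
    idx = [int(np.argmax(np.linalg.norm(X, axis=1)))]
    for _ in range(p - 1):
        E = X[idx].T                               # (bands, k) chosen so far
        # Projector onto the orthogonal complement of span(E).
        P = np.eye(X.shape[1]) - E @ np.linalg.pinv(E)
        resid = np.linalg.norm(X @ P.T, axis=1)    # unexplained energy per pixel
        idx.append(int(np.argmax(resid)))          # most "mixed-in" new signature
    return X[idx]

def compress(X, E):
    """Stage 2: express each pixel as a linear combination of endmembers.
    Returns abundance matrix A: (num_pixels, p), the lossy compressed form."""
    A, *_ = np.linalg.lstsq(E.T, X.T, rcond=None)  # unconstrained LS unmixing
    return A.T

def decompress(A, E):
    """Reconstruct spectra from abundances and endmembers."""
    return A @ E

# Tiny synthetic example: 100 pixels, 50 bands, 4 endmembers.
rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(100, 50)))
E = extract_endmembers(X, p=4)
A = compress(X, E)
X_hat = decompress(A, E)
ratio = X.size / (A.size + E.size)                 # achieved compression ratio
print(f"compression ~{ratio:.1f}:1, RMSE {np.sqrt(np.mean((X - X_hat)**2)):.3f}")
```

The compression comes from storing only the p endmember signatures plus p abundance values per pixel instead of all L spectral bands, giving a ratio of roughly L/p for large images; the reconstruction error concentrates in whatever spectral variability the endmember set fails to span, which is why endmember quality drives signature fidelity in applications such as hot-spot fire detection.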