Conference on Satellite Data Compression, Communications, and Archiving

Clusters versus FPGAs for Spectral Mixture Analysis-Based Lossy Hyperspectral Data Compression

Abstract

The increasing number of airborne and satellite platforms that incorporate hyperspectral imaging spectrometers has created a pressing need for efficient storage, transmission, and data compression methodologies. In particular, hyperspectral data compression is expected to play a crucial role in many remote sensing applications. Many efforts have been devoted to designing and developing lossless and lossy algorithms for hyperspectral imagery. However, most available lossy compression approaches have largely overlooked the impact of mixed pixels and subpixel targets, which can be accurately modeled and uncovered by exploiting the wealth of spectral information provided by hyperspectral image data. In this paper, we develop a simple lossy compression technique that relies on the concept of spectral unmixing, one of the most popular approaches for dealing with mixed pixels and subpixel targets in hyperspectral analysis. The proposed method uses a two-stage approach in which the purest spectral signatures (also called endmembers) are first extracted from the input data and then used to express mixed pixels as linear combinations of those endmembers. Analytical and experimental results are presented in the context of a real application, using hyperspectral data collected by NASA's Jet Propulsion Laboratory over the World Trade Center area in New York City right after the terrorist attacks of September 11th. These data are used to evaluate the impact of different compression methods on the spectral signature quality needed for accurate detection of hot-spot fires. Two parallel implementations of the proposed lossy compression algorithm are developed: a multiprocessor implementation tested on Thunderhead, a massively parallel Beowulf cluster at NASA's Goddard Space Flight Center, and a hardware implementation developed on a Xilinx Virtex-II FPGA device. Combined, these parts offer a perspective on the potential and the emerging challenges of incorporating parallel data compression techniques into realistic hyperspectral imaging problems.
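
The abstract contains no code, but the two-stage idea it describes is easy to illustrate. The Python sketch below is an assumption-laden toy rather than the authors' implementation: it takes the endmembers as given (skipping the extraction stage the paper performs) and uses SciPy's non-negative least squares for abundance estimation, a common choice that the abstract does not actually specify. Compression comes from storing a short abundance vector per pixel instead of the full spectrum.

    import numpy as np
    from scipy.optimize import nnls

    def compress(cube, endmembers):
        # cube: (pixels, bands); endmembers: (k, bands).
        # Store only a non-negative abundance vector of length k per pixel,
        # shrinking per-pixel storage from `bands` values to k values.
        E = endmembers.T  # (bands, k) design matrix for least squares
        return np.array([nnls(E, px)[0] for px in cube])

    def decompress(abundances, endmembers):
        # Reconstruct each pixel as a linear combination of endmembers.
        return abundances @ endmembers

    # Toy usage with synthetic data; a real pipeline would first extract
    # the endmembers from the cube with an algorithm such as N-FINDR or PPI.
    rng = np.random.default_rng(0)
    endmembers = rng.random((4, 224))            # 4 signatures, 224 bands
    abund_true = rng.dirichlet(np.ones(4), 100)  # 100 mixed pixels
    cube = abund_true @ endmembers
    recon = decompress(compress(cube, endmembers), endmembers)
    print("max reconstruction error:", np.abs(cube - recon).max())

Reconstruction is near-exact here only because the toy pixels are built as exact linear mixtures; on real data the residual of the linear mixture model is precisely what makes the scheme lossy.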
