IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing

Efficient Implementation of Hyperspectral Anomaly Detection Techniques on GPUs and Multicore Processors


Abstract

Anomaly detection is an important task for hyperspectral data exploitation. Although many algorithms have been developed for this purpose in recent years, the large dimensionality of hyperspectral image data means that fast anomaly detection remains a challenging task. In this work, we exploit the computational power of commodity graphics processing units (GPUs) and multicore processors to obtain implementations of a well-known anomaly detection algorithm developed by Reed and Xiaoli (the RX algorithm) and of a local variant (LRX), which applies the same concept to a local sliding window centered around each image pixel. LRX has been shown to be more accurate in detecting small anomalies, but it is computationally more expensive than RX. Our interest is focused on improving the computational aspects, not only through efficient parallel implementations but also by analyzing the mathematical issues of the method and adopting computationally inexpensive solvers. Furthermore, we assess the energy consumption of the newly developed parallel implementations, which is very important in practice. Our optimizations (based on software and hardware techniques) result in a significant reduction of execution time and energy consumption, both of which are key to increasing the practical interest of the considered algorithms. Indeed, for RX, the runtime obtained is less than the data acquisition time when real hyperspectral images are used. Our experimental results also indicate that the proposed optimizations and parallelization techniques can significantly improve the overall performance of the RX and LRX algorithms while retaining their anomaly detection accuracy.
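
For context, the RX detector scores each pixel x by its Mahalanobis distance to the scene background, δ(x) = (x − μ)ᵀ Σ⁻¹ (x − μ), where μ and Σ are the mean spectrum and spectral covariance estimated over the whole image (RX) or over a sliding window centered on the pixel (LRX). The NumPy sketch below illustrates only the standard global formulation; it is not the paper's parallel implementation, and the function name rx_detector is this example's own. Replacing the explicit inverse of Σ with a linear solve is one instance of the kind of computationally inexpensive solver choice the abstract alludes to.

    import numpy as np

    def rx_detector(cube):
        """Global RX scores for a hyperspectral cube of shape
        (rows, cols, bands); larger scores flag spectral anomalies."""
        rows, cols, bands = cube.shape
        pixels = cube.reshape(-1, bands).astype(np.float64)

        mu = pixels.mean(axis=0)            # background mean spectrum
        centered = pixels - mu
        # Sample spectral covariance of the scene (bands x bands).
        sigma = centered.T @ centered / (pixels.shape[0] - 1)

        # Solve sigma @ y = x for every pixel instead of forming
        # sigma^{-1} explicitly; cheaper and numerically safer.
        y = np.linalg.solve(sigma, centered.T)          # (bands, N)
        scores = np.einsum('ij,ji->i', centered, y)     # x^T sigma^{-1} x
        return scores.reshape(rows, cols)

LRX evaluates the same expression with μ and Σ re-estimated from the window around each pixel, which explains its higher cost: one covariance estimate and solve per pixel rather than one for the entire scene.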

