Symposium on Signal Processing, Images and Computer Vision

Accelerating Huffman decoding of seismic data on GPUs

Abstract

The Huffman coding algorithm is widely used for seismic data compression because it offers good performance in terms of compression ratio. The algorithm compresses the data by assigning shorter code-words to the most frequent symbols, while the remaining symbols receive longer code-words. It is difficult to accelerate the decoding process on parallel architectures because the variable length of the code-words makes the process highly sequential. We propose a strategy that stores the encoded data in packets with headers. This strategy forces code-words to be aligned at packet boundaries, allowing us to parallelize the decoding process. The parallel Huffman decoder was implemented on a GeForce GTX660 GPU and tested using different seismic datasets supplied by an oil company. Comparisons in terms of throughput (i.e., decoded data per second) suggest that our work is superior to other implementations. Experimental results allowed us to establish how the proposed strategy affects the compression ratio and how the number of threads per block affects the performance of the algorithm. Additionally, we show how the throughput is related to the compression ratio.
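To make the packet idea concrete, the following is a minimal CUDA sketch of a packet-aligned decoder written under assumptions that go beyond the abstract: a fixed payload size, a small per-packet header holding an output offset and a symbol count, and a flattened Huffman tree. The names PacketHeader, HuffTable, decode_packets, and PACKET_BYTES are hypothetical and are not taken from the paper.

```cuda
// Sketch only: one thread decodes one packet, which is possible because
// every code-word is forced to start at a packet boundary.
#include <cstdint>
#include <cuda_runtime.h>

constexpr int PACKET_BYTES = 256;   // payload size chosen at encode time (assumed)

struct PacketHeader {
    uint32_t out_offset;   // index of this packet's first decoded symbol in the output
    uint32_t num_symbols;  // number of symbols encoded in the payload
};

// Flattened binary Huffman tree: child[i][bit] gives the next node index,
// leaves are marked with child[i][0] < 0 and carry the decoded symbol.
struct HuffTable {
    int16_t child[512][2];
    uint8_t symbol[512];
};

__global__ void decode_packets(const uint8_t*      payloads,   // PACKET_BYTES per packet
                               const PacketHeader* headers,
                               const HuffTable*    table,
                               uint8_t*            out,
                               int                 num_packets)
{
    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p >= num_packets) return;

    const uint8_t* bits = payloads + (size_t)p * PACKET_BYTES;
    PacketHeader   hdr  = headers[p];

    int      node     = 0;   // root of the Huffman tree
    int      bitpos   = 0;   // bit cursor inside this packet
    uint32_t produced = 0;

    // Walk the tree bit by bit; no other packet is ever inspected.
    while (produced < hdr.num_symbols) {
        int bit = (bits[bitpos >> 3] >> (7 - (bitpos & 7))) & 1;   // MSB-first
        ++bitpos;
        node = table->child[node][bit];
        if (table->child[node][0] < 0) {                 // reached a leaf
            out[hdr.out_offset + produced++] = table->symbol[node];
            node = 0;                                    // restart at the root
        }
    }
}

// Host side, one thread per packet (blockDim chosen per experiment):
//   int threads = 128;
//   decode_packets<<<(num_packets + threads - 1) / threads, threads>>>(
//       d_payloads, d_headers, d_table, d_out, num_packets);
```

Because each code-word starts at a packet boundary, a thread can decode its packet independently of all others; the trade-off, as the abstract notes, is a lower compression ratio, since the per-packet headers and any padding at the end of each payload add overhead.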
