Data Compression Conference

Fast Near-Lossless or Lossless Compression of Large 3D Neuro-Anatomical Images

Abstract

3D neuro-anatomical images and other volumetric data sets are important in many scientific and biomedical fields. Since such data sets can be extremely large, a scalable compression method is critical for storing, processing and transmitting them. To achieve high compression rates, most existing volume compression methods are lossy, which is usually unacceptable in biomedical applications. Our near-lossless or lossless compression algorithm uses a Hilbert traversal to produce a data stream from the original image. Along this stream the image context changes relatively slowly, which helps the subsequent DPCM prediction reduce the source entropy. We use an extremely fast linear DPCM that predicts the current voxel's intensity as the average of the previous two voxels' intensities. If near-lossless compression is desired, the prediction error is uniformly quantized; in lossless mode, the prediction error is left untouched. The prediction error is then encoded with a Huffman code. To provide efficient data access, the source image is divided into blocks indexed by an octree data structure. Each sub-volume block has its own prediction-error distribution. Building a single Huffman code book for the entire volume's prediction errors is inefficient, while building a separate code book for every sub-volume block introduces heavy coding overhead. We therefore characterize each block's error distribution as a point in a high-dimensional space and bin the points using a novel binning method. All error distributions that fall into the same bin are summed to form a summed error distribution, and a Huffman code book is built for each such distribution, so the total number of code books equals the number of bins and the coding overhead is effectively reduced. Each sub-volume block's prediction errors are coded with its bin's Huffman code book. Although our compression method is designed for performance-critical digital brain atlas applications, it is also suitable for other applications that require very fast data access without prior decompression and can accept a modest compression rate. For details, please see the full-length technical report.
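The abstract does not name a Hilbert-curve implementation. The sketch below linearizes a cubic volume whose side is a power of two, using the third-party hilbertcurve Python package (an assumed choice, 2.x API); it produces the voxel stream that the DPCM stage would consume.

    import numpy as np
    from hilbertcurve.hilbertcurve import HilbertCurve

    def hilbert_stream(volume):
        """Return the voxels of `volume` reordered along a 3D Hilbert curve."""
        side = volume.shape[0]
        assert volume.shape == (side, side, side), "sketch assumes a cubic volume"
        p = side.bit_length() - 1          # curve iterations: side == 2**p
        assert side == 2 ** p, "sketch assumes a power-of-two side"
        hc = HilbertCurve(p, 3)            # 3 spatial dimensions
        pts = np.asarray(hc.points_from_distances(range(side ** 3)))
        return volume[pts[:, 0], pts[:, 1], pts[:, 2]]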
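A minimal sketch of the linear DPCM and the uniform residual quantizer, under assumptions the abstract leaves open: integer voxel intensities, a zero initial predictor state, and a quantizer step of 2*delta + 1, which bounds the per-voxel reconstruction error by delta. The encoder predicts from reconstructed values so that encoder and decoder stay in lockstep.

    import numpy as np

    def dpcm_encode(stream, delta=0):
        """Linear DPCM over a 1D voxel stream: predict each voxel as the
        average of the previous two; delta == 0 is lossless, delta > 0
        quantizes residuals uniformly (near-lossless)."""
        x = np.asarray(stream, dtype=np.int64)
        step = 2 * delta + 1
        residuals = np.empty(len(x), dtype=np.int64)
        prev1 = prev2 = 0                  # assumed initial predictor state
        for i, v in enumerate(x):
            pred = (prev1 + prev2) // 2    # average of previous two voxels
            e = int(v) - pred
            if delta > 0:                  # uniform quantizer, step 2*delta+1
                q = (abs(e) + delta) // step
                e = q if e >= 0 else -q
                v = pred + e * step        # value the decoder will reconstruct
            residuals[i] = e
            prev2, prev1 = prev1, int(v)   # predict from reconstructed values
        return residuals

    def dpcm_decode(residuals, delta=0):
        """Invert dpcm_encode; exact when delta == 0."""
        step = 2 * delta + 1
        out = np.empty(len(residuals), dtype=np.int64)
        prev1 = prev2 = 0
        for i, e in enumerate(residuals):
            v = (prev1 + prev2) // 2 + int(e) * step
            out[i] = v
            prev2, prev1 = prev1, v
        return out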
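The abstract does not specify the exact code construction, so the following is the textbook greedy Huffman merge, building a prefix-free code book from a residual-frequency histogram.

    import heapq
    from collections import Counter

    def huffman_code_book(freqs):
        """Build a prefix-free Huffman code book {symbol: bitstring}
        from a symbol -> frequency mapping."""
        if len(freqs) == 1:                    # degenerate single-symbol case
            return {next(iter(freqs)): "0"}
        heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
        heapq.heapify(heap)
        tie = len(heap)                        # tie-breaker; dicts don't compare
        while len(heap) > 1:
            f1, _, c1 = heapq.heappop(heap)    # two least-frequent subtrees
            f2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + code for s, code in c1.items()}
            merged.update({s: "1" + code for s, code in c2.items()})
            heapq.heappush(heap, (f1 + f2, tie, merged))
            tie += 1
        return heap[0][2]

    # Usage: code a residual stream with its own code book
    residuals = [0, 0, 1, -1, 0, 2, 0, -1, 0, 0]
    book = huffman_code_book(Counter(residuals))
    bitstream = "".join(book[e] for e in residuals)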
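The abstract gives no further detail on the octree layout; this sketch assumes a cubic volume with power-of-two dimensions divided into fixed-size cubic blocks, with octree leaves referencing one block each so a voxel's block can be located without decompressing anything else.

    class OctreeNode:
        """Octree over a cubic volume; each leaf references one block."""
        def __init__(self, origin, size, block_size):
            self.origin, self.size = origin, size
            if size == block_size:             # leaf: one sub-volume block
                self.children = None
                self.block_id = tuple(o // block_size for o in origin)
            else:                              # split into eight octants
                half = size // 2
                self.block_id = None
                self.children = [
                    OctreeNode((origin[0] + dx * half,
                                origin[1] + dy * half,
                                origin[2] + dz * half), half, block_size)
                    for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)]

        def locate(self, x, y, z):
            """Return the id of the block covering voxel (x, y, z)."""
            if self.children is None:
                return self.block_id
            half = self.size // 2
            i = ((x - self.origin[0]) >= half) * 4 \
              + ((y - self.origin[1]) >= half) * 2 \
              + ((z - self.origin[2]) >= half)
            return self.children[i].locate(x, y, z)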
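The paper's novel binning method is not described in the abstract. As a plainly labeled stand-in, this sketch treats each block's normalized residual histogram as a point in a 16-dimensional space, coarsely quantizes each coordinate to form a bin key, sums the histograms within each bin, and builds one Huffman code book per bin (reusing huffman_code_book from the sketch above). The dimension count, residual range, and quantization levels are all illustrative assumptions.

    import numpy as np
    from collections import Counter, defaultdict

    def bin_key(block_errors, dims=16, levels=4, lo=-32, hi=32):
        """Coarse bin key for one block's residual distribution.
        Residuals outside [lo, hi) are ignored when forming the key."""
        hist, _ = np.histogram(block_errors, bins=dims, range=(lo, hi))
        p = hist / max(hist.sum(), 1)          # point in a `dims`-dim space
        return tuple(np.minimum((p * levels).astype(int), levels - 1).tolist())

    def per_bin_code_books(blocks):
        """blocks: list of 1D residual arrays, one per sub-volume block.
        Returns one code book per bin plus each block's bin assignment."""
        assignment, summed = {}, defaultdict(Counter)
        for i, errs in enumerate(blocks):
            k = bin_key(errs)
            assignment[i] = k                  # which bin this block fell into
            summed[k].update(np.asarray(errs).tolist())  # summed distribution
        books = {k: huffman_code_book(c) for k, c in summed.items()}
        return books, assignment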