IEEE Workshop on Multimedia Signal Processing

Selective Use Of Multiple Entropy Models In Audio Coding



Abstract

Multiple entropy models for Huffman or arithmetic coding are widely used to improve the compression efficiency of many algorithms when the source probability distribution varies. However, the use of multiple entropy models significantly increases the memory requirements of both the encoder and the decoder. In this paper, we present an algorithm which maintains almost all of the compression gains of multiple entropy models for only a very small increase in memory over an algorithm which uses a single entropy model. It can be used with any entropy coding scheme, such as Huffman or arithmetic coding. This is accomplished by employing multiple entropy models only for the most probable symbols and fewer entropy models for the less probable symbols. We show that, by allowing effective switching of the entropy model in use as source statistics change over an audio transform block, this algorithm reduces the audio coding bitrate by 5%-8% compared with an existing algorithm that uses the same amount of table memory.
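
The symbol-partitioning idea in the abstract can be illustrated with a short sketch. The Python code below is a hypothetical, simplified illustration and not the paper's actual scheme: each context keeps a dedicated probability table only for its top_k most probable symbols, and every other symbol is escaped to a single shared fallback table, so table memory scales with top_k times the number of contexts rather than with the full alphabet size. Ideal arithmetic-coding bit costs (-log2 p) stand in for real Huffman or arithmetic codeword lengths, and all names (SelectiveEntropyCoder, top_k, ESCAPE, and so on) are invented for this sketch.

from collections import Counter
import math

ESCAPE = "ESC"  # reserved entry: "this symbol is coded with the shared fallback model"


class SelectiveEntropyCoder:
    """Toy model selector (illustrative only): each context gets a dedicated
    probability table covering just its top_k most probable symbols; all
    remaining symbols, in every context, share one fallback table."""

    def __init__(self, training_blocks, num_contexts, top_k):
        # training_blocks: iterable of (context_index, list_of_symbols).
        per_ctx = [Counter() for _ in range(num_contexts)]
        for ctx, symbols in training_blocks:
            per_ctx[ctx].update(symbols)

        shared = Counter()
        self.top = []       # per context: symbols that keep a dedicated code
        self.ctx_prob = []  # per context: probabilities for top symbols + ESCAPE
        for counts in per_ctx:
            total = sum(counts.values()) or 1
            top_symbols = [s for s, _ in counts.most_common(top_k)]
            probs = {s: counts[s] / total for s in top_symbols}
            esc_mass = sum(c for s, c in counts.items() if s not in top_symbols)
            probs[ESCAPE] = max(esc_mass / total, 1e-9)  # floor avoids log(0)
            self.top.append(set(top_symbols))
            self.ctx_prob.append(probs)
            # Symbols outside the top_k feed the single shared model.
            for s, c in counts.items():
                if s not in top_symbols:
                    shared[s] += c

        total = sum(shared.values()) or 1
        self.shared_prob = {s: c / total for s, c in shared.items()}

    def code_length(self, ctx, symbol):
        """Ideal bit cost of one symbol (arithmetic-coding view, -log2 p)."""
        probs = self.ctx_prob[ctx]
        if symbol in self.top[ctx]:
            return -math.log2(probs[symbol])
        # Escape: pay for the ESCAPE entry, then for the shared-table code.
        p_shared = self.shared_prob.get(symbol, 1e-9)
        return -math.log2(probs[ESCAPE]) - math.log2(p_shared)


if __name__ == "__main__":
    # Toy data: context 0 favors small coefficients, context 1 larger ones.
    blocks = [(0, [0, 0, 1, -1, 0, 2]), (1, [3, -4, 5, 0, 6, -3])]
    coder = SelectiveEntropyCoder(blocks, num_contexts=2, top_k=3)
    bits = sum(coder.code_length(0, s) for s in [0, 1, 2])
    print(f"ideal cost for [0, 1, 2] in context 0: {bits:.2f} bits")

In this toy setup, per-context table memory grows with top_k rather than with the alphabet size, which mirrors the memory/bitrate trade-off the abstract describes; the escape-based fallback used here is only one possible way for the less probable symbols to share fewer models.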
