Estimation-theoretic framework for scalability and packet loss resilience in predictive video coding.

Abstract

This dissertation proposes an Estimation-Theoretic (ET) framework which enables scalability and packet loss resilience in predictive video coders, while maintaining high compression efficiency. The fundamental premise of the ET framework is that optimal estimation, given all the available information, is the ultimate process that should underlie compression. While this framework leads to several new solutions to long-standing problems in predictive source coding, its most important application is in the field of standard DCT-based predictive video coding.

First, an ET approach is derived for enhancement-layer prediction in a generic scalable coder. The resulting ET prediction is shown to be optimal in the sense that it minimizes the mean squared prediction error given all the information available at the enhancement layer. The performance of the ET predictor is demonstrated on scalable DPCM coding of Markov sequences. The method is then adopted for predictive DCT-based scalable coding of video, where considerable performance gains are demonstrated. Further, ET prediction is used to improve the bit-rate scalability of two-channel audio coding.

As further applications, the use of ET prediction for robust video compression is demonstrated in two settings. Multiple description video coding, an important tool for packet loss resilience, is shown to benefit from optimal prediction within the ET framework.

The second line of research uses the ET framework to address the problem of error propagation and packet loss resilience in predictive video coders. Here, the objective is to enable the design of efficient Joint Source-Channel Coding (JSCC) schemes which minimize the total decoder distortion for the given rate and channel loss conditions. Towards this goal, a Recursive Optimal per-Pixel Estimate (ROPE) of the expected total decoder distortion is derived.

The use of ROPE for parameter optimization in JSCC video coders leads to significant performance gains. This is illustrated using the important error-resilience tool of macroblock (MB) coding mode selection. The resulting technique, referred to as ROPE-RD, achieves substantial PSNR gains over widely used RD- and non-RD-based mode-switching methods. The ROPE-RD algorithm is then extended to incorporate feedback information from the receiver. Simulation results show that ROPE-based mode selection substantially outperforms conventional prediction mode selection schemes.

Finally, the combination of the ET-prediction-based error concealment method at the decoder and ROPE-based mode selection at the encoder is shown to further improve the packet loss resilience of scalable video coders. (Abstract shortened by UMI.)
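
As an illustrative sketch of the enhancement-layer result summarized above (the notation is introduced here and is not taken from the dissertation): suppose a transform coefficient x is to be predicted at the enhancement layer, r denotes the reference information available there, and the base layer has already located x in the quantization interval (a, b]. Under a conditional density model p(x | r), the minimum mean-squared-error enhancement-layer predictor is the conditional mean restricted to that interval,

    \tilde{x} = E[x \mid x \in (a, b],\, r] = \frac{\int_a^b x\, p(x \mid r)\, dx}{\int_a^b p(x \mid r)\, dx},

so the prediction draws on both layers' information, rather than on the base layer or the enhancement layer alone.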
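
The ROPE estimate referred to above admits a compact recursive form; the following is a simplified sketch under assumptions introduced here only for illustration (per-packet loss probability p and previous-frame copy concealment). Writing f_n^i for the original value of pixel i in frame n, \hat{e}_n^i for its quantized prediction residual, j for the motion-compensated reference pixel in frame n-1, and \tilde{f}_n^i for the (random) decoder reconstruction as seen from the encoder, the expected per-pixel distortion is

    d_n^i = E[(f_n^i - \tilde{f}_n^i)^2] = (f_n^i)^2 - 2 f_n^i\, E[\tilde{f}_n^i] + E[(\tilde{f}_n^i)^2],

and the two moments propagate recursively from frame to frame, e.g. for an inter-coded pixel

    E[\tilde{f}_n^i] = (1 - p)\,(\hat{e}_n^i + E[\tilde{f}_{n-1}^j]) + p\, E[\tilde{f}_{n-1}^i],
    E[(\tilde{f}_n^i)^2] = (1 - p)\,\big((\hat{e}_n^i)^2 + 2\,\hat{e}_n^i\, E[\tilde{f}_{n-1}^j] + E[(\tilde{f}_{n-1}^j)^2]\big) + p\, E[(\tilde{f}_{n-1}^i)^2],

with analogous expressions for intra-coded pixels. In this reading, ROPE-RD mode selection amounts to the Lagrangian choice of the macroblock mode minimizing D_ROPE(mode) + \lambda R(mode), where D_ROPE is the expected decoder distortion of the macroblock under the candidate mode, R is its rate, and \lambda is the Lagrange multiplier matching the rate constraint.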

Record details

  • Author

    Regunathan, Shankar L.

  • Affiliation

    University of California, Santa Barbara.

  • Degree-granting institution: University of California, Santa Barbara.
  • Subject: Engineering, Electronics and Electrical.
  • Degree: Ph.D.
  • Year: 2001
  • Pagination: 103 p.
  • Total pages: 103
  • Format: PDF
  • Language: English
  • Chinese Library Classification: Radio electronics and telecommunications
  • Keywords
