Chinese journal: 《计算机工程》 (Computer Engineering)

A GPU-Based Parallel Decoding Algorithm for H.264

Abstract

To address the problem of parallel decoding of H.264 video streams, this paper builds a CPU/GPU cooperative computing model to accelerate video encoding and decoding. Using the Compute Unified Device Architecture (CUDA) as the GPU programming model, it proposes and implements GPU-accelerated computation of the inverse DCT and intra-frame prediction. While maintaining high computational accuracy, the combination with CUDA mixed programming greatly improves system performance. The algorithm uses NVIDIA's CUDA language to run the inverse DCT and intra-frame prediction in parallel on the GPU during decoding. Experiments compare the parallel algorithm with a CPU-only implementation and verify the speedup of the parallel decoding algorithm on different numbers of video streams. The results show that the algorithm substantially improves video-stream codec efficiency, achieving an average speedup of about 10x over CPU-only computation.
