International Conference on Big Data and Smart Computing
Massive parallelization technique for random linear network coding

Abstract

Random linear network coding (RLNC) has gained popularity as a useful performance-enhancing tool for communication networks. In this paper, we propose an RLNC parallel implementation technique for General-Purpose Graphics Processing Units (GPGPUs). Recently, GPGPU technology has paved the way for parallelizing RLNC; however, current state-of-the-art parallelization techniques for RLNC are unable to fully utilize GPGPU technology in many cases. Addressing this problem, we propose a new RLNC parallelization technique that can fully exploit GPGPU architectures. Our parallel method achieves more than 4 times the throughput of existing state-of-the-art parallel RLNC decoding schemes for GPGPUs and 20 times the throughput of state-of-the-art serial RLNC decoders.
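The abstract does not reproduce the coding scheme itself, but the operations being parallelized are standard RLNC: an encoder emits random linear combinations of source packets, and a decoder recovers the originals by Gaussian elimination once it has collected a full-rank set of combinations. Below is a minimal serial sketch over GF(2), where coefficient vectors are bitmasks and addition is XOR; practical RLNC (including the GPGPU decoders the paper benchmarks) typically works over larger fields such as GF(2^8), and the function names here are illustrative, not taken from the paper.

```python
import random

def rlnc_encode(packets, num_coded, rng):
    """Emit num_coded random GF(2) linear combinations of the source packets.

    packets: list of ints (packet payloads as bit strings).
    Returns a list of (coeff_mask, payload) pairs, where bit i of
    coeff_mask indicates that packets[i] was XORed into payload.
    """
    n = len(packets)
    coded = []
    while len(coded) < num_coded:
        mask = rng.randrange(1, 1 << n)  # random nonzero coefficient vector
        payload = 0
        for i in range(n):
            if (mask >> i) & 1:
                payload ^= packets[i]
        coded.append((mask, payload))
    return coded

def rlnc_decode(coded, n):
    """Gauss-Jordan elimination over GF(2).

    Returns the n recovered source packets, or None if the received
    combinations are rank-deficient (decoding must wait for more packets).
    """
    basis = {}  # pivot bit position -> (coeff_mask, payload)
    for mask, payload in coded:
        # forward elimination: reduce against existing pivot rows
        while mask:
            piv = mask.bit_length() - 1
            if piv not in basis:
                basis[piv] = (mask, payload)
                break
            pm, pp = basis[piv]
            mask ^= pm        # cancel the leading coefficient bit
            payload ^= pp     # keep the payload consistent with the mask
    if len(basis) < n:
        return None
    # back-substitution: clear every non-pivot bit so each row becomes
    # a unit vector, i.e. one decoded source packet
    for piv in sorted(basis, reverse=True):
        pm, pp = basis[piv]
        for other in basis:
            if other != piv:
                om, op = basis[other]
                if (om >> piv) & 1:
                    basis[other] = (om ^ pm, op ^ pp)
    return [basis[i][1] for i in range(n)]
```

The row operations inside the elimination loops are independent per bit position, which is what GPGPU decoding schemes like the one proposed here exploit by distributing them across thousands of threads.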

