IEEE International Symposium on High Performance Computer Architecture

Thread block compaction for efficient SIMT control flow



Abstract

Manycore accelerators such as graphics processor units (GPUs) organize processing units into single-instruction, multiple-data "cores" to improve throughput per unit hardware cost. Programming models for these accelerators encourage applications to run kernels with large groups of parallel scalar threads. The hardware groups these threads into warps/wavefronts and executes them in lockstep, an execution model NVIDIA dubs single-instruction, multiple-thread (SIMT). While current GPUs employ a per-warp (or per-wavefront) stack to manage divergent control flow, this approach loses efficiency on applications with nested, data-dependent control flow. In this paper, we propose and evaluate the benefits of extending the sharing of resources within a block of warps, already used for scratchpad memory, to exploit control flow locality among threads (where such sharing may at first seem detrimental). In our proposal, warps within a thread block share a common block-wide stack for divergence handling. At a divergent branch, threads are compacted into new warps in hardware. Our simulation results show that this compaction mechanism provides an average speedup of 22% over a baseline per-warp, stack-based reconvergence mechanism, and 17% versus dynamic warp formation, on a set of CUDA applications that suffer significantly from control flow divergence.
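The compaction idea itself can be sketched as a small host-side model (a conceptual illustration with invented names such as CompactedSide and WARP_SIZE, not the paper's hardware design): at a divergent branch, a block-wide stack lets every warp in the block contribute its taken and not-taken threads, which are then repacked into as few full warps as possible.

    #include <cstdio>
    #include <vector>

    // Conceptual software model of block-wide compaction (invented for
    // illustration): at a divergent branch, all threads of a block are
    // grouped by branch outcome and repacked into dense warps, instead of
    // each warp keeping its own active mask.
    constexpr int WARP_SIZE = 32;

    struct CompactedSide {
        std::vector<int> tids;                  // thread ids taking this side
        int warps() const {                     // dense warps after compaction
            return (int)(tids.size() + WARP_SIZE - 1) / WARP_SIZE;
        }
    };

    int main() {
        const int block_size = 256;             // 8 warps per thread block
        CompactedSide taken, not_taken;

        // Data-dependent outcome; here 1 in 4 threads take the branch,
        // spread across every warp of the block.
        for (int tid = 0; tid < block_size; ++tid)
            (tid % 4 == 0 ? taken : not_taken).tids.push_back(tid);

        // Per-warp masking would issue both paths for all 8 warps (16 issues);
        // block-wide compaction issues only the dense warps each side needs.
        std::printf("taken: %d warps, not taken: %d warps (vs. 8 + 8 per-warp)\n",
                    taken.warps(), not_taken.warps());
        return 0;
    }

In this example the taken path fills 2 dense warps and the not-taken path 6, instead of both paths being issued for all 8 partially filled warps; reducing the number of partially filled warps is the effect the paper's compaction mechanism targets.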
