IEEE/ACM International Conference on Computer-Aided Design

Exploring cache bypassing and partitioning for multi-tasking on GPUs


Abstract

Graphics Processing Unit (GPU) computing has become ubiquitous in embedded systems, as evidenced by its wide adoption for various general-purpose applications. As more and more applications are accelerated by GPUs, multi-tasking scenarios start to emerge. Multi-tasking allows multiple applications to execute simultaneously on the same GPU and share its resources. This brings new challenges due to contention among the applications for shared resources such as caches. However, the caches on GPUs are difficult to use well: used inappropriately, they may hurt performance instead of improving it. In this paper, we propose to use cache partitioning together with cache bypassing as the shared cache management mechanism for multi-tasking on GPUs. The combined approach aims to reduce interference among the tasks while preserving the locality of each task. However, the interplay between cache partitioning and bypassing raises further challenges. On one hand, the cache space partitioned to each task affects its bypassing decisions; on the other hand, bypassing affects the cache capacity each task requires. To address this, we propose a two-step approach. First, we use cache partitioning to assign dedicated cache space to each task, reducing inter-task interference; during this step we also compare cache partitioning against coarse-grained cache bypassing. Then, we use fine-grained cache bypassing to selectively bypass certain data requests and threads within each task. We explore different cache partitioning and bypassing designs and demonstrate the potential benefits of this approach. Experiments over a wide range of applications show that our technique improves overall system throughput by 52% on average compared to the default multi-tasking solution on GPUs.
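To make the interplay between the two mechanisms concrete, below is a minimal C++ sketch (not the paper's implementation) of a shared set-associative cache that combines them: each task may allocate only within its own partition of ways, and every miss passes through a fine-grained bypass test before it is allowed to fill a line. The two-task setup, the way split, and the reuse-counting bypass heuristic are illustrative assumptions for exposition only.

```cpp
// Sketch of a way-partitioned shared cache with per-request bypassing.
// The partition sizes and the bypass heuristic are illustrative assumptions.
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <utility>
#include <vector>

struct Line { uint64_t tag = 0; bool valid = false; uint64_t lruStamp = 0; };

class PartitionedCache {
public:
    // ways[t] = number of ways reserved for task t in every set.
    PartitionedCache(int sets, std::vector<int> ways)
        : sets_(sets), waysPerTask_(std::move(ways)) {
        int base = 0;
        for (int w : waysPerTask_) { wayBase_.push_back(base); base += w; }
        totalWays_ = base;
        lines_.assign(static_cast<size_t>(sets_) * totalWays_, Line{});
    }

    // Returns true on hit. A miss allocates only if the request is not bypassed.
    bool access(int task, uint64_t addr) {
        uint64_t tag = addr / sets_;
        int set = static_cast<int>(addr % sets_);
        Line* part = &lines_[static_cast<size_t>(set) * totalWays_ + wayBase_[task]];
        Line* victim = part;  // LRU victim within this task's ways only
        for (int w = 0; w < waysPerTask_[task]; ++w) {
            Line& l = part[w];
            if (l.valid && l.tag == tag) { l.lruStamp = ++clock_; return true; }
            if (!l.valid || l.lruStamp < victim->lruStamp) victim = &l;
        }
        if (!shouldBypass(task, addr))              // fine-grained bypass test
            *victim = Line{tag, true, ++clock_};    // allocate on miss
        return false;
    }

private:
    // Illustrative heuristic: bypass a block until it has shown reuse, so
    // streaming (use-once) data never evicts resident lines of the same task.
    bool shouldBypass(int task, uint64_t addr) {
        uint64_t key = (addr << 4) | static_cast<uint64_t>(task);  // task < 16
        return ++seen_[key] < 2;  // first touch -> bypass; reuse -> allocate
    }

    int sets_, totalWays_ = 0;
    std::vector<int> waysPerTask_, wayBase_;
    std::vector<Line> lines_;
    std::unordered_map<uint64_t, int> seen_;
    uint64_t clock_ = 0;
};

int main() {
    // Two co-running tasks share a 64-set cache; task 0 gets 3 ways, task 1 gets 1.
    PartitionedCache cache(64, {3, 1});
    int hits = 0;
    for (int rep = 0; rep < 3; ++rep)       // stream a 128-block working set 3x
        for (uint64_t a = 0; a < 128; ++a)
            hits += cache.access(0, a);
    std::cout << "task 0 hits: " << hits << "\n";  // pass 1 bypasses, pass 2 fills, pass 3 hits
}
```

In this sketch the partition bound (waysPerTask_) plays the role of step one, isolating each task's working set, while shouldBypass plays the role of step two, filtering use-once requests inside the partition; changing either one visibly shifts the pressure on the other, which mirrors the interdependence the paper sets out to resolve.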