
A memory scheduling strategy for eliminating memory access interference in heterogeneous system

Abstract

When multiple CPUs and GPUs are integrated on the same chip and share memory, access requests from different cores interfere with one another. Memory requests from the GPU severely degrade CPU memory access performance, requests from multiple CPU applications become intertwined at the memory controller and hurt each other's performance, and differences in access latency among GPU cores raise the average memory access latency. To address these problems in the shared memory of heterogeneous multi-core systems, we propose a step-by-step memory scheduling strategy that improves system performance. The strategy first creates separate memory request queues based on the request source when the memory controller receives a request, isolating CPU requests from GPU requests and thereby preventing GPU requests from interfering with CPU requests. Then, for the CPU request queue, it applies a dynamic bank partitioning strategy that maps each application to a different bank set according to its memory access characteristics, eliminating memory request interference among multiple CPU applications without sacrificing bank-level parallelism. Finally, for the GPU request queue, it introduces criticality to measure the difference in memory access latency between GPU cores; building on the first-ready, first-come-first-served (FR-FCFS) policy, we implement criticality-aware memory scheduling that balances the locality and criticality of application accesses.
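
The abstract outlines a three-step policy: per-source request queues, dynamic bank partitioning for the CPU queue, and criticality-aware FR-FCFS for the GPU queue. The C++ sketch below shows one way these steps could fit together; it is not the paper's implementation, and the class and field names (StepScheduler, Request), the 0.5 memory-intensity threshold, the half-and-half bank split, and the per-core criticality values are illustrative assumptions.

```cpp
#include <cstddef>
#include <cstdint>
#include <deque>
#include <iostream>
#include <unordered_map>

struct Request {
    int core;               // issuing CPU core or GPU core id
    bool from_gpu;          // request origin (step 1 routes on this)
    std::uint64_t row;      // DRAM row address
    std::uint64_t arrival;  // arrival cycle, used as the FCFS tie-break
};

class StepScheduler {
public:
    explicit StepScheduler(int banks) : banks_(banks) {}

    // Step 1: keep CPU and GPU traffic in separate queues so GPU bursts
    // cannot delay CPU requests inside a shared queue.
    void enqueue(const Request& r) {
        (r.from_gpu ? gpu_q_ : cpu_q_).push_back(r);
    }

    // Step 2 (assumed form of dynamic bank partitioning): a CPU application
    // marked memory-intensive is mapped onto its own bank subset, and the
    // other applications interleave over the remaining banks, so their
    // requests no longer collide while each set keeps bank-level parallelism.
    int bankForCpu(const Request& r) const {
        bool intensive =
            intensity_.count(r.core) && intensity_.at(r.core) > 0.5;  // illustrative threshold
        int reserved = banks_ / 2;                                    // illustrative split
        return intensive ? static_cast<int>(r.row % reserved)
                         : reserved + static_cast<int>(r.row % (banks_ - reserved));
    }
    void setIntensity(int core, double missRate) { intensity_[core] = missRate; }

    // Step 3: criticality-aware FR-FCFS for the GPU queue. Row-buffer hits are
    // still preferred (locality); among requests with the same hit status the
    // most latency-critical core wins, and arrival time breaks remaining ties.
    bool pickGpu(std::uint64_t open_row, Request* out) {
        if (gpu_q_.empty()) return false;
        std::size_t best = 0;
        for (std::size_t i = 1; i < gpu_q_.size(); ++i) {
            const Request& a = gpu_q_[i];
            const Request& b = gpu_q_[best];
            bool hit_a = (a.row == open_row), hit_b = (b.row == open_row);
            double crit_a = criticality_[a.core], crit_b = criticality_[b.core];
            if (hit_a != hit_b) { if (hit_a) best = i; }
            else if (crit_a != crit_b) { if (crit_a > crit_b) best = i; }
            else if (a.arrival < b.arrival) { best = i; }
        }
        *out = gpu_q_[best];
        gpu_q_.erase(gpu_q_.begin() + static_cast<std::ptrdiff_t>(best));
        return true;
    }
    void setCriticality(int core, double pendingLatency) { criticality_[core] = pendingLatency; }

private:
    int banks_;
    std::deque<Request> cpu_q_, gpu_q_;
    std::unordered_map<int, double> intensity_;    // per-CPU-application memory intensity
    std::unordered_map<int, double> criticality_;  // per-GPU-core latency pressure
};

int main() {
    StepScheduler sched(8);
    sched.setIntensity(0, 0.8);      // CPU core 0 runs a memory-intensive application
    sched.setCriticality(1, 120.0);  // GPU core 1 is latency-critical

    sched.enqueue({0, false, 0x40, 1});  // CPU request -> CPU queue
    sched.enqueue({1, true, 0x80, 2});   // GPU request, row miss, critical core
    sched.enqueue({2, true, 0x40, 3});   // GPU request, row-buffer hit

    Request r{};
    if (sched.pickGpu(0x40, &r))  // locality wins: the row-buffer hit from core 2 goes first
        std::cout << "GPU pick: core " << r.core << "\n";
    std::cout << "CPU bank for core 0: " << sched.bankForCpu({0, false, 0x40, 1}) << "\n";
    return 0;
}
```

In this toy run the GPU queue serves the row-buffer hit from core 2 before the more critical but row-missing request from core 1, which is the locality/criticality trade-off the scheduling step aims to balance.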

Bibliographic record

  • Source
    Journal of supercomputing | 2020, No. 4 | pp. 3129-3154 | 26 pages
  • Author affiliations

    Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China;

    Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China;

    Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China;

  • Indexed in: Science Citation Index (SCI); Engineering Index (EI)
  • Format: PDF
  • Language: eng
  • CLC classification:
  • Keywords

    Heterogeneous multi-core; Shared memory; Memory scheduling;
