IEEE Transactions on Parallel and Distributed Systems

Data forwarding in scalable shared-memory multiprocessors



Abstract

Scalable shared-memory multiprocessors are often slowed down by long-latency memory accesses. One way to cope with this problem is to use data forwarding to overlap memory accesses with computation. With data forwarding, when a processor produces a datum, in addition to updating its cache, it sends a copy of the datum to the caches of the processors that the compiler identified as consumers of it. As a result, when the consumer processors access the datum, they find it in their caches. This paper addresses two main issues. First, it presents a framework for a compiler algorithm for forwarding. Second, using address traces, it evaluates the performance impact of different levels of support for forwarding. Our simulations of a 32-processor machine show that optimistic support for forwarding speeds up five applications by an average of 50% with large caches and 30% with small caches. With large caches, most sharing read misses are eliminated, while with small caches, forwarding does not significantly increase the number of conflict misses. Overall, support for forwarding in shared-memory multiprocessors promises to deliver good application speedups.
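The producer-push mechanism the abstract describes can be illustrated with a toy model. The class names, the two-level latencies, and the explicit `consumers` list (standing in for the compiler's consumer analysis) are all illustrative assumptions for this sketch, not the paper's actual algorithm or protocol:

```python
# Toy model of data forwarding: when a producer writes a datum, it also
# pushes a copy into the caches of compiler-identified consumers, so their
# later reads hit locally instead of paying a long-latency remote access.
# Latencies and names below are assumed for illustration only.

HIT_LATENCY = 1      # cycles for a local cache hit (assumed)
MISS_LATENCY = 100   # cycles for a long-latency remote access (assumed)

class Processor:
    def __init__(self, pid):
        self.pid = pid
        self.cache = {}      # address -> value
        self.cycles = 0      # accumulated memory-access latency

    def write(self, addr, value, memory, consumers=()):
        memory[addr] = value
        self.cache[addr] = value
        # Data forwarding: eagerly copy the datum into each consumer's cache,
        # in addition to updating the producer's own cache.
        for c in consumers:
            c.cache[addr] = value

    def read(self, addr, memory):
        if addr in self.cache:
            self.cycles += HIT_LATENCY
            return self.cache[addr]
        self.cycles += MISS_LATENCY          # sharing read miss
        self.cache[addr] = memory[addr]
        return memory[addr]

memory = {}
producer, consumer = Processor(0), Processor(1)

# With forwarding, the consumer's first read of the datum is a cache hit.
producer.write(0x10, 42, memory, consumers=[consumer])
assert consumer.read(0x10, memory) == 42
print(consumer.cycles)  # 1 cycle: local hit, thanks to forwarding

# A processor the compiler did not identify as a consumer still misses.
other = Processor(2)
assert other.read(0x10, memory) == 42
print(other.cycles)  # 100 cycles: long-latency miss
```

The gap between the two printed latencies is the overlap opportunity the paper exploits; the trade-off (visible with small caches) is that forwarded copies occupy cache space and can displace other data.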
