
Can High Bandwidth and Latency Justify Large Cache Blocks in Scalable Multiprocessors



Abstract

An important architectural design decision affecting the performance of coherent caches in shared-memory multiprocessors is the choice of block size. There are two primary factors that influence this choice: the reference behavior of application programs and the remote access bandwidth and latency of the machine. Several studies have shown that increasing the block size can lower the miss rate and reduce the number of invalidations. However, increasing the block size can also increase the miss rate by, for example, increasing false sharing or the number of cache evictions. Large cache blocks can also generate network contention. Given that we anticipate enormous increases in both network bandwidth and latency in large-scale, shared-memory multiprocessors, the question arises as to what effect these increases will have on the choice of block size. We use analytical modeling and execution-driven simulation of parallel programs on a large-scale shared-memory machine to examine the relationship between cache block size and application performance as a function of remote access bandwidth and latency. We show that even under assumptions of high remote access bandwidth, the best application performance usually results from using cache blocks between 32 and 128 bytes in size.
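The trade-off described in the abstract can be illustrated with a toy cost model. This is purely an illustrative sketch, not the analytical model used in the report: each remote miss pays a fixed latency plus a transfer time proportional to the block size, while the miss rate falls with block size due to spatial locality but rises again as false sharing grows. All constants and the miss-rate shape below are assumptions chosen only to make the shape of the trade-off visible.

    # Toy cost model (illustrative assumptions only, not the report's model).
    LATENCY_CYCLES = 200            # assumed fixed remote-access latency per miss
    BANDWIDTH_BYTES_PER_CYCLE = 4   # assumed remote transfer bandwidth

    def miss_rate(block_size, base_rate=0.05, locality=0.6, false_share=0.0001):
        """Assumed miss-rate shape: spatial locality lowers misses roughly as a
        power of the block size, while false sharing adds misses linearly."""
        spatial = base_rate * (block_size / 16) ** (-locality)
        return spatial + false_share * block_size

    def miss_penalty(block_size):
        """Cycles per remote miss: fixed latency plus block transfer time."""
        return LATENCY_CYCLES + block_size / BANDWIDTH_BYTES_PER_CYCLE

    def stall_per_reference(block_size):
        """Expected remote stall cycles contributed by each memory reference."""
        return miss_rate(block_size) * miss_penalty(block_size)

    if __name__ == "__main__":
        for b in (16, 32, 64, 128, 256, 512, 1024):
            print(f"{b:5d} B  miss rate {miss_rate(b):.4f}  "
                  f"penalty {miss_penalty(b):6.1f} cycles  "
                  f"stall/ref {stall_per_reference(b):.2f} cycles")

With these assumed parameters, the stall time per reference bottoms out at roughly 64 to 128 bytes, consistent with the 32 to 128 byte range reported in the abstract; the exact optimum depends entirely on the chosen constants and on the application's sharing behavior.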
