Journal of Systems Architecture

Exploring the performance of split data cache schemes on superscalar processors and symmetric multiprocessors


Abstract

Current technology continues to provide smaller and faster transistors, so processor architects can offer more complex and functional ILP processors, because manufacturers can fit more transistors on the same chip area. As a consequence, the fraction of chip area reachable in a single clock cycle is dropping while the number of transistors on the chip keeps increasing. At the same time, problems related to power consumption and heat dissipation are a growing concern. This scenario is forcing processor designers to look for new processor organizations that provide the same or better performance at smaller sizes. This especially affects on-chip cache memory design; therefore, studies proposing new, smaller cache organizations that maintain, or even increase, the hit ratio are welcome. Cache schemes that better exploit data locality (bypassing schemes, prefetching techniques, victim caches, etc.) are a good example. This paper presents a data cache scheme called the filter cache, which splits the first-level data cache into two independent organizations; its performance is compared with two other proposals from the open literature, as well as with larger classical caches. To evaluate performance, two different scenarios are considered: a superscalar processor and a symmetric multiprocessor. The results show that (i) in the superscalar processor, the split data caches perform similarly to or better than larger conventional caches; (ii) some splitting schemes work well in multiprocessors while others work less well because of data locality effects; (iii) the reuse information that some split schemes maintain for cache management is also useful for designing new competitive protocols that boost multiprocessor performance; and (iv) the filter data cache achieves the best performance in both scenarios. (c) 2005 Elsevier B.V. All rights reserved.
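
The abstract does not detail how the filter cache manages its two structures, but the general idea of splitting the first-level data cache and steering blocks by reuse can be illustrated with a small simulator. The sketch below is a hypothetical Python model, not the paper's actual design: the SimpleCache and SplitL1DataCache classes, the cache sizes, and the "allocate new blocks in the filter, promote on reuse" rule are illustrative assumptions only.

from collections import OrderedDict


class SimpleCache:
    """Fully associative LRU cache tracking block addresses (illustration only)."""

    def __init__(self, n_blocks):
        self.n_blocks = n_blocks
        self.blocks = OrderedDict()  # block address -> reuse count

    def lookup(self, block):
        if block in self.blocks:
            self.blocks.move_to_end(block)   # refresh LRU position
            self.blocks[block] += 1
            return True
        return False

    def insert(self, block):
        if block in self.blocks:
            self.blocks.move_to_end(block)
            return
        if len(self.blocks) >= self.n_blocks:
            self.blocks.popitem(last=False)  # evict the LRU block
        self.blocks[block] = 0


class SplitL1DataCache:
    """L1 data cache split into a small filter cache and a larger main cache.

    New blocks are allocated in the filter first; a block that shows reuse
    while it sits in the filter is promoted to the main cache, so streaming
    (single-use) data never pollutes the main cache. This is one plausible
    way of exploiting the reuse information mentioned in the abstract.
    """

    def __init__(self, filter_blocks=8, main_blocks=64, block_size=32):
        self.block_size = block_size
        self.filter = SimpleCache(filter_blocks)
        self.main = SimpleCache(main_blocks)
        self.hits = 0
        self.misses = 0

    def access(self, address):
        block = address // self.block_size
        if self.main.lookup(block):
            self.hits += 1
            return True
        if self.filter.lookup(block):
            self.hits += 1
            self.main.insert(block)          # reuse observed: promote to main
            return True
        self.misses += 1
        self.filter.insert(block)            # first touch goes to the filter
        return False


if __name__ == "__main__":
    cache = SplitL1DataCache()
    # One hot block (0x1000) mixed with a long streaming sweep: the sweep
    # flows through the filter, while the reused block is promoted to main.
    trace = [0x1000, *range(0x4000, 0x8000, 32), 0x1000, 0x1000]
    for addr in trace:
        cache.access(addr)
    print(f"hits={cache.hits} misses={cache.misses}")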