Microprocessors and Microsystems

A scalable multi-porting solution for future wide-issue processors

Abstract

Wide-issue processors that issue tens of instructions per cycle put heavy stress on the memory system, including the data caches. For wide-issue architectures, the data cache needs to be heavily multi-ported and have extremely wide data-paths. This paper studies a scalable solution that achieves multi-porting with short data-paths and lower hardware complexity at higher clock rates. Our approach divides memory streams into multiple independent sub-streams, with the help of a prediction mechanism, before they enter the reservation stations. The partitioned memory-reference instructions are then fed into separate memory pipelines, each of which is connected to a small data cache, called an access region cache. In an ideal situation, the separation of independent memory references facilitates the use of multiple caches with a smaller number of ports and thus increases the data bandwidth. We describe and evaluate a wide-issue processor with distinct memory pipelines driven by a prediction mechanism. The potential performance of the proposed design is measured by comparing it with existing multi-porting solutions as well as an ideal multi-ported data cache.
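The abstract describes the steering mechanism only at a high level, so the short Python sketch below illustrates one possible interpretation rather than the authors' actual design: a last-outcome, PC-indexed predictor assigns each memory-reference instruction to one of several memory pipelines, each backed by a small access region cache, and references steered to the wrong pipeline count as mispredictions. The names RegionPredictor, AccessRegionCache, and region_of, the region-by-high-address-bits mapping, and all parameter values are hypothetical assumptions.

    # Minimal sketch of access-region steering; all structures and parameters
    # below are illustrative assumptions, not the paper's implementation.
    NUM_REGIONS = 4            # independent memory pipelines / caches
    PREDICTOR_ENTRIES = 1024   # PC-indexed prediction table size
    CACHE_LINES = 64           # lines per access region cache (direct-mapped)
    LINE_BYTES = 32            # line size in bytes

    class RegionPredictor:
        """Predicts, from the instruction PC, which access region a load or
        store will touch, so it can be steered to a pipeline before the
        effective address is computed (last-outcome prediction)."""
        def __init__(self):
            self.table = [0] * PREDICTOR_ENTRIES
        def predict(self, pc):
            return self.table[(pc // 4) % PREDICTOR_ENTRIES]
        def update(self, pc, actual_region):
            # Remember the region this static instruction actually accessed.
            self.table[(pc // 4) % PREDICTOR_ENTRIES] = actual_region

    class AccessRegionCache:
        """A small direct-mapped cache; one instance per memory pipeline."""
        def __init__(self):
            self.tags = [None] * CACHE_LINES
        def access(self, addr):
            line = addr // LINE_BYTES
            idx = line % CACHE_LINES
            hit = self.tags[idx] == line
            self.tags[idx] = line
            return hit

    def region_of(addr):
        # Assumed region function: partition the address space by high bits,
        # so stack, heap, and global references tend to fall in different regions.
        return (addr >> 16) % NUM_REGIONS

    def run(trace):
        """trace: iterable of (pc, addr) pairs for memory-reference instructions."""
        predictor = RegionPredictor()
        caches = [AccessRegionCache() for _ in range(NUM_REGIONS)]
        mispredicts = hits = 0
        for pc, addr in trace:
            predicted = predictor.predict(pc)
            actual = region_of(addr)
            if predicted != actual:
                # Steered to the wrong pipeline: the reference must be replayed
                # to the correct one, which costs cycles.
                mispredicts += 1
            hits += caches[actual].access(addr)
            predictor.update(pc, actual)
        return mispredicts, hits

    if __name__ == "__main__":
        # Toy trace: interleaved heap loads (region 2) and stack stores (region 3).
        heap_loads = [(0x100, 0x00020000 + 8 * i) for i in range(256)]
        stack_stores = [(0x104, 0x7FFF0000 + 8 * i) for i in range(256)]
        trace = [ref for pair in zip(heap_loads, stack_stores) for ref in pair]
        m, h = run(trace)
        print(f"mispredictions: {m}, cache hits: {h}")

On this toy trace, each static instruction mispredicts only on its first lookup and is then steered consistently to its own cache, which is the situation in which each pipeline can get by with a small number of ports; real workloads would determine how often such clean separation actually occurs.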
