
Cache design strategies for efficient adaptive line placement


Abstract

Efficient memory hierarchy design is critical due to the large gap between processor and memory speeds. In this context, cache memories play a crucial role in bridging this gap. Cache management has become even more significant with the appearance of chip multiprocessors (CMPs), which bring larger memory bandwidth requirements and larger working sets in many emerging applications, and which also need a fair and efficient distribution of cache resources among the cores on a single chip. This dissertation aims to analyze some of the problems commonly found in modern caches and to propose cost-effective solutions to improve their performance. Most of the approaches proposed in this Thesis reduce cache miss rates by taking advantage of the different levels of demand that cache sets may experience. This way, lines are placed in underutilized cache blocks of other cache sets if they are likely to be reused in the near future and there is not enough space in their native cache set. When this does not suffice, this dissertation proposes to modify the insertion policies of oversubscribed sets in a coordinated way. Hence, our proposals retain the most useful part of the working set in the cache while discarding temporary data as soon as possible. These ideas, initially developed in the context of last-level caches (LLCs) in single-core systems, are successfully adapted in this Thesis to first-level caches and to multicore systems. Regarding first-level caches, a novel design is presented that dynamically allocates banks to the instruction or the data cache depending on their degree of pressure. As for multicore systems, our designs are first provided with thread-awareness in shared caches, so that each stream of requests can be treated according to its owner.
Finally, we explore the sharing of resources by spilling lines among the private LLCs of a CMP using several innovative features, such as a neutral state, which prevents a cache from taking part in the spilling mechanism when doing so could be harmful; variable granularities for the management of the caches; and the coordinated management of the cache insertion policy. Throughout this process we have used a simple and cost-effective metric, called the Set Saturation Level (SSL), to track the state of each cache set. It is worth pointing out that our approaches are very competitive and often outperform many of the most recent techniques in the field, despite incurring very small storage and power consumption overheads.
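The abstract does not specify the exact SSL formulation, the partner-set selection, or the spilling conditions, so the following Python sketch is purely illustrative of the general idea: a per-set saturation counter that rises on misses and falls on hits, a fixed partner set chosen by flipping the lowest index bit, and spilling only when the native set is both full and saturated while the partner has free ways. All names (`ToyCache`, `saturation_threshold`) and policy details are assumptions, not the dissertation's actual design.

```python
# Toy sketch of saturation-guided adaptive line placement (illustrative only;
# the SSL update rule and partner-set choice below are assumptions).

class ToyCache:
    def __init__(self, num_sets=4, ways=2, saturation_threshold=4):
        self.ways = ways
        self.threshold = saturation_threshold
        # Each set holds a list of tags in LRU order (MRU last).
        self.sets = [[] for _ in range(num_sets)]
        # Assumed Set Saturation Level: one small counter per set.
        self.ssl = [0] * num_sets

    def _partner(self, index):
        # Assumed pairing: a set may spill into its neighbour (index XOR 1).
        return index ^ 1

    def access(self, index, tag):
        """Return True on a hit (native or spilled), False on a miss."""
        native = self.sets[index]
        partner = self.sets[self._partner(index)]
        if tag in native:                       # hit in the native set
            native.remove(tag)
            native.append(tag)                  # move to MRU position
            self.ssl[index] = max(0, self.ssl[index] - 1)
            return True
        if tag in partner:                      # hit in a spilled line
            partner.remove(tag)
            partner.append(tag)
            return True
        # Miss: the native set is under pressure, so raise its SSL.
        self.ssl[index] = min(2 * self.threshold, self.ssl[index] + 1)
        if (len(native) >= self.ways
                and self.ssl[index] >= self.threshold
                and len(partner) < self.ways):
            # Native set is full and saturated: spill into the
            # underutilized partner set instead of evicting.
            partner.append(tag)
        else:
            if len(native) >= self.ways:
                native.pop(0)                   # evict the LRU line
            native.append(tag)
        return False
```

With two ways per set, repeatedly touching three distinct lines in set 0 would normally thrash; once the saturation counter crosses the threshold, the overflow line is parked in set 1 and later accesses to it hit there instead of missing.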

Bibliographic information

  • Author

    Rolán García Dyer;

  • Affiliation
  • Year: 2012
  • Total pages
  • Original format: PDF
  • Language: eng
  • CLC classification

