The RAMpage memory hierarchy is an alternative to the traditional division between cache and main memory: main memory is moved up a level and DRAM is used as a paging device. The idea behind RAMpage is to reduce hardware complexity, albeit at the cost of software complexity, with a view to allowing more flexible memory system design. This paper investigates some issues in choosing between RAMpage and a conventional cache architecture, illustrating the trade-offs involved in placing memory-system complexity in hardware versus software. Performance results in this paper are based on a simple Rambus implementation of DRAM, with the performance characteristics of Direct Rambus, which should be available in 1999. This paper explores the conditions under which it becomes feasible to perform a context switch on a miss in the RAMpage model, and the conditions under which RAMpage wins over a conventional cache architecture: as the CPU-DRAM speed gap grows, RAMpage becomes more viable.