Design and implementation of high-performance memory systems for future packet buffers

Abstract

In this paper, we address the design of a future high-speed router that supports line rates as high as OC-3072 (160 Gb/s), around one hundred ports, and several service classes. Building such a high-speed router raises many technological problems, one of them being the packet buffer design, mainly because router design must provide worst-case bandwidth guarantees and not just average-case optimizations. A previous packet buffer design provides worst-case bandwidth guarantees by using a hybrid SRAM/DRAM approach. Next-generation routers need to support hundreds of interfaces (i.e., ports and service classes). Unfortunately, providing high bandwidth for hundreds of interfaces requires the previous design to use large SRAMs, which become a bandwidth bottleneck. The key observation we make is that the SRAM size is proportional to the DRAM access time, but we can reduce the effective DRAM access time by overlapping multiple accesses to different banks, allowing us to reduce the SRAM size. The key challenge is that, to keep the worst-case bandwidth guarantees, we must ensure that there are no bank conflicts while the accesses are in flight. We guarantee the absence of bank conflicts by reordering the DRAM requests using a modern issue-queue-like mechanism. Because our design may lead to fragmentation of memory across packet buffer queues, we propose to share the DRAM space among multiple queues by renaming the queue slots. To the best of our knowledge, the design proposed in this paper is the fastest buffer design using commodity DRAM published to date.
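To illustrate the idea of hiding DRAM latency by overlapping accesses to different banks, the minimal C++ sketch below models an issue-queue-like scheduler that, each controller cycle, issues the oldest pending request whose bank is not currently busy, so no two in-flight accesses ever target the same bank. This is not the paper's implementation; the names and parameters (IssueQueueScheduler, NUM_BANKS, tRC_CYCLES, Request) and the one-request-per-cycle issue rule are illustrative assumptions.

```cpp
// Hedged sketch: out-of-order issue of DRAM requests across banks.
// Overlapping accesses to idle banks lowers the effective access time
// per request, which is the property the paper exploits to shrink the SRAM.
#include <deque>
#include <iostream>
#include <vector>

constexpr int NUM_BANKS = 8;    // assumed number of DRAM banks
constexpr int tRC_CYCLES = 10;  // assumed bank busy time per access (in controller cycles)

struct Request {
    int queue_id;  // packet-buffer queue this request serves
    int bank;      // DRAM bank the target block maps to
};

class IssueQueueScheduler {
    std::deque<Request> pending;       // waiting requests, oldest first
    std::vector<int> bank_busy_until;  // cycle at which each bank becomes free
public:
    IssueQueueScheduler() : bank_busy_until(NUM_BANKS, 0) {}

    void enqueue(const Request& r) { pending.push_back(r); }

    // Issue the oldest request whose bank is idle; skip (reorder past)
    // requests that would conflict with an access already in flight.
    void tick(int cycle) {
        for (auto it = pending.begin(); it != pending.end(); ++it) {
            if (bank_busy_until[it->bank] <= cycle) {
                bank_busy_until[it->bank] = cycle + tRC_CYCLES;
                std::cout << "cycle " << cycle << ": issue queue " << it->queue_id
                          << " -> bank " << it->bank << "\n";
                pending.erase(it);
                return;
            }
        }
        // Every pending request conflicts with an in-flight access: stall this cycle.
    }

    bool empty() const { return pending.empty(); }
};

int main() {
    IssueQueueScheduler sched;
    // Two back-to-back requests to bank 0 would serialize; the scheduler
    // issues the bank-1 request in between, overlapping the two bank-0 accesses.
    sched.enqueue({0, 0});
    sched.enqueue({1, 0});
    sched.enqueue({2, 1});
    for (int cycle = 0; !sched.empty() && cycle < 100; ++cycle)
        sched.tick(cycle);
}
```

In this toy trace the request to bank 1 is issued at cycle 1 while the first bank-0 access is still in flight, whereas the second bank-0 request waits until cycle 10; with many banks and many queues, such reordering keeps the DRAM pipeline full without ever creating a bank conflict.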
