
On-chip FIFO cache for network I/O: A feasibility study.


Abstract

The large gap between memory system performance and processor performance continues to grow in spite of advances in process technology and memory system architecture. This gap is generally bridged using a multi-level on-chip cache. Many network applications, particularly server applications, use network data in a streaming fashion: the incoming and outgoing data exchanged with network interfaces are used at most once or twice. Consequently, when packet contents are accessed via the cache hierarchy, many of these data items remain in the cache long after they are consumed. The resulting cache pollution deprives other applications of effective use of the cache and degrades overall processor throughput.

This thesis proposes a separate FIFO cache for holding incoming network data, avoiding pollution of the main processor caches. The proposed design exploits the fact that almost 100% of incoming packets are accessed by the server within a very short time of their arrival, in a largely FIFO fashion. The proposed FIFO cache for incoming data streams directly accepts data DMA-ed from the network interface card (NIC) and permits the processing cores to consume the incoming data directly from the FIFO cache. The FIFO includes additional mechanisms for looking up the data of any incoming packet and implements a replacement policy that evicts both data that is not accessed in FIFO order and data accessed in FIFO order after two consecutive accesses. This pro-active deletion of sequentially accessed data from the FIFO cache contrasts with the behavior of traditional caches, where data is evicted only when new data must be brought in, and it allows the size of the FIFO cache to be kept small.
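The pro-active eviction policy described above can be modeled in a few lines. The sketch below is an illustrative software analogue, not the thesis's hardware design: the class name, capacity handling, and the exact "evict after two consecutive in-order accesses" interpretation are assumptions made for the example.

```python
from collections import OrderedDict

class FIFONetworkCache:
    """Toy model of the proposed FIFO cache's replacement policy.

    Packet data is inserted in arrival (DMA) order. A read that follows
    FIFO order (i.e. hits the oldest entry) evicts the entry after a
    second consecutive access; a read that breaks FIFO order evicts the
    entry immediately. Eviction happens pro-actively on access, not only
    when new data must be brought in.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # addr -> consecutive in-order access count

    def insert(self, addr):
        # Arriving packet data is appended at the FIFO tail;
        # if the cache is full, the oldest entry is dropped.
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)
        self.entries[addr] = 0

    def read(self, addr):
        if addr not in self.entries:
            return False  # miss: data is served via the normal memory path
        head = next(iter(self.entries))  # oldest (FIFO-order) entry
        if addr == head:
            self.entries[addr] += 1
            if self.entries[addr] >= 2:   # two consecutive in-order accesses
                del self.entries[addr]    # pro-active deletion
        else:
            del self.entries[addr]        # out-of-FIFO-order access: evict now
        return True
```

In this model the cache drains itself as the server consumes packets in order, which is what bounds its required size.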
We evaluate the proposed design using a cycle-accurate full-system simulator that executes the application, the OS, and the networking protocol stacks, with accurate simulation models of the NIC, the DMA infrastructure, the memory system, and a multicore processor. Our evaluations demonstrate that the proposed FIFO cache for incoming network data dramatically increases overall CPU performance simply by reducing the cache pollution caused by incoming network data streams.

Record details

  • Author

    Chen, Shunfei.

  • Author affiliation

    State University of New York at Binghamton.

  • Degree-granting institution: State University of New York at Binghamton.
  • Subject: Computer Science.
  • Degree: M.S.
  • Year: 2010
  • Pages: 52 p.
  • Total pages: 52
  • Original format: PDF
  • Language: eng
