IEEE/ACM Transactions on Networking

Simultaneously Reducing Latency and Power Consumption in OpenFlow Switches


Abstract

The Ethernet switch is a primary building block for today's enterprise networks and data centers. As network technologies converge upon a single Ethernet fabric, there is ongoing pressure to improve the performance and efficiency of the switch while maintaining flexibility and a rich set of packet processing features. The OpenFlow architecture aims to provide flexibility and programmable packet processing to meet these converging needs. Of the many ways to create an OpenFlow switch, a popular choice is to make heavy use of ternary content addressable memories (TCAMs). Unfortunately, TCAMs can consume a considerable amount of power and, when used to match flows in an OpenFlow switch, put a bound on switch latency. In this paper, we propose enhancing an OpenFlow Ethernet switch with per-port packet prediction circuitry in order to simultaneously reduce latency and power consumption without sacrificing rich policy-based forwarding enabled by the OpenFlow architecture. Packet prediction exploits the temporal locality in network communications to predict the flow classification of incoming packets. When predictions are correct, latency can be reduced, and significant power savings can be achieved from bypassing the full lookup process. Simulation studies using actual network traces indicate that correct prediction rates of 97% are achievable using only a small amount of prediction circuitry per port. These studies also show that prediction circuitry can help reduce the power consumed by a lookup process that includes a TCAM by 92% and simultaneously reduce the latency of a cut-through switch by 66%.
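The core mechanism the abstract describes, a small per-port predictor that exploits temporal locality to bypass the full TCAM lookup when its prediction is correct, can be illustrated with a minimal Python sketch. This is not the authors' implementation; the table size, the FIFO eviction policy, the FlowKey fields, and the tcam_lookup stand-in are assumptions made purely for illustration.

    # Hypothetical sketch (not the paper's code): a per-port prediction table that
    # exploits temporal locality to skip the full TCAM lookup on a correct prediction.
    from collections import OrderedDict, namedtuple

    FlowKey = namedtuple("FlowKey", "src_ip dst_ip src_port dst_port proto")

    class PortPredictor:
        """A few cached (flow key -> action) entries per ingress port."""
        def __init__(self, size=4):                 # size is an assumed parameter
            self.size = size
            self.entries = OrderedDict()

        def predict(self, key):
            return self.entries.get(key)            # None signals a misprediction

        def update(self, key, action):
            if key not in self.entries and len(self.entries) >= self.size:
                self.entries.popitem(last=False)    # evict the oldest entry (FIFO)
            self.entries[key] = action

    def classify_stream(packets, tcam_lookup, predictor):
        """Classify packets arriving on one port; return the correct-prediction rate.
        tcam_lookup stands in for the switch's full OpenFlow match pipeline."""
        hits = 0
        for key in packets:
            action = predictor.predict(key)
            if action is not None:
                hits += 1                           # prediction hit: TCAM bypassed
            else:
                action = tcam_lookup(key)           # full lookup only on a miss
                predictor.update(key, action)
            # ... forward the packet according to `action` ...
        return hits / len(packets) if packets else 0.0

    # Example: repeated packets from the same flow mostly hit the predictor.
    flow = FlowKey("10.0.0.1", "10.0.0.2", 5000, 80, "tcp")
    rate = classify_stream([flow] * 100, lambda k: "forward:port3", PortPredictor())

Driving a model of this kind with real packet traces, rather than the toy stream above, is how one would reproduce the per-port prediction-rate measurements the abstract reports.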
