ACM/IEEE Annual International Symposium on Computer Architecture

HALO: Accelerating Flow Classification for Scalable Packet Processing in NFV

Abstract

Network Function Virtualization (NFV) has become the new standard in cloud platforms, as it provides the flexibility and agility to deploy various network services on general-purpose servers. However, it still suffers from sub-optimal performance in software packet processing. Our characterization study of virtual switches shows that flow classification is the major bottleneck limiting packet-processing throughput in NFV, even though a large portion of the classification rules can be cached in the last-level cache (LLC) of modern servers. To overcome this bottleneck, we propose Halo, an effective near-cache computing solution for accelerating flow classification. Halo exploits the hardware parallelism of the cache architecture, consisting of Non-Uniform Cache Access (NUCA) and the Caching and Home Agent (CHA), available in almost all Intel® multi-core CPUs. It associates an accelerator with each CHA component to speed up and scale flow classification within the LLC. To make Halo more generic, we extend the x86-64 instruction set with three simple data-lookup instructions for utilizing the proposed near-cache accelerators. We develop Halo with the full-system simulator gem5. Experiments with a variety of real-world network service workloads demonstrate that Halo improves the throughput of basic flow-rule lookup operations by 3.3×, and scales the representative flow classification algorithm, tuple space search, by up to 23.4×, with negligible negative impact on the performance of collocated network services, compared with state-of-the-art software-based solutions. Halo is also up to 48.2× more energy-efficient than the fastest but expensive ternary content-addressable memory (TCAM), with trivial power and area overhead.
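For context, the sketch below illustrates tuple space search, the flow classification algorithm the abstract cites as the representative workload that Halo accelerates. This is a plain software illustration, not the paper's near-cache hardware design or its proposed x86-64 instructions; the TupleSpaceClassifier class and the example rule fields, masks, and priorities are hypothetical.

```python
# Minimal tuple space search (TSS) sketch: rules are grouped by their mask
# tuple, each group is a hash table, and classifying a packet probes every
# group with one hash lookup. These per-tuple probes are the lookup cost
# that software virtual switches pay and that Halo moves near the LLC.

from collections import defaultdict

class TupleSpaceClassifier:
    def __init__(self):
        # mask_tuple -> {masked_field_values: (priority, action)}
        self.tables = defaultdict(dict)

    def insert(self, fields, masks, priority, action):
        key = tuple(f & m for f, m in zip(fields, masks))
        table = self.tables[tuple(masks)]
        # Keep only the highest-priority rule per masked key, for simplicity.
        if key not in table or priority > table[key][0]:
            table[key] = (priority, action)

    def classify(self, fields):
        # Probe each tuple space and return the highest-priority match.
        best = None
        for masks, table in self.tables.items():
            key = tuple(f & m for f, m in zip(fields, masks))
            hit = table.get(key)
            if hit and (best is None or hit[0] > best[0]):
                best = hit
        return best[1] if best else None


# Hypothetical two-field rules (e.g. src/dst addresses encoded as integers).
clf = TupleSpaceClassifier()
clf.insert(fields=(0x0A000000, 0xC0A80000), masks=(0xFF000000, 0xFFFF0000),
           priority=10, action="forward")
clf.insert(fields=(0x0A000100, 0x00000000), masks=(0xFFFFFF00, 0x00000000),
           priority=20, action="drop")
print(clf.classify((0x0A000105, 0xC0A80001)))  # -> "drop" (higher priority)
```

Because every distinct mask tuple adds another hash probe per packet, lookup cost grows with rule-set diversity, which is why the abstract identifies flow classification as the throughput bottleneck that benefits from hardware parallelism across CHA slices.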
