In: International Conference on Ubiquitous and Future Networks

The virtualized MPTCP proxy performance in cellular network



Abstract

For handling massive traffic in cellular networks, network function virtualization (NFV) is considered the most cost-efficient solution for 5G. Since NFV decouples network functions from the underlying hardware, purpose-built machines can be replaced by commodity hardware. However, NFV may suffer from the very fact that it is a purely software-based solution. The objective of this paper is to identify NFV performance issues in the cellular network, and to investigate whether NFV performance remains comparable when handling MPTCP connections. Since few servers are MPTCP-enabled, a SOCKS proxy is usually deployed in between to enable MPTCP connections. We regarded a virtualized proxy as an NFV instance and set up two types of virtualized SOCKS proxies, one on KVM and the other in Docker, and also examined whether there is a performance difference between hypervisor-based and container-based virtualization in our setting. The results show that the Docker proxy performs better than the KVM proxy: in terms of resource consumption, for example, Docker utilized 31.9% of the host CPU, whereas KVM consumed 36.9%, when each handled 2,000 concurrent requests. The throughput comparison of different TCP connections reflects the characteristic of MPTCP that it performs best on long, large flows. The latency between the server and the proxy determines the throughput of MPTCP with a virtualized proxy. When that latency is large (RTT 100 ms), the MPTCP proxy throughput of all three flow sizes falls below that of a single TCP connection, whether for a short flow (1 KB) or a long flow (164 MB). However, when the latency is in the middle range (RTT 50 ms), the MPTCP proxy throughput of short (1 KB) and medium (900 KB) flows remains poor, but a long flow (164 MB) still performs better than a single TCP connection.
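The setup above hinges on a SOCKS proxy terminating MPTCP on behalf of servers that only speak regular TCP. To make the proxy's role concrete, here is a minimal SOCKS5 CONNECT relay sketch in Python (standard library only). It is an illustration, not the paper's implementation: it handles only IPv4 CONNECT with no authentication, assumes handshake bytes arrive in single segments (fine on loopback), and contains no MPTCP-specific code; on a Linux host with kernel MPTCP enabled, the client-facing socket could carry MPTCP transparently.

```python
import socket
import threading

def pipe(src, dst):
    """Copy bytes from src to dst until EOF, then signal EOF downstream."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def handle_client(client):
    """Minimal SOCKS5 handshake (RFC 1928): no-auth, IPv4 CONNECT only."""
    # Greeting: VER, NMETHODS, METHODS...  ->  reply: VER, METHOD=no-auth.
    _ver, nmethods = client.recv(2)
    client.recv(nmethods)
    client.sendall(b"\x05\x00")

    # Request: VER, CMD=CONNECT, RSV, ATYP=IPv4, then 4-byte addr + 2-byte port.
    _ver, _cmd, _rsv, _atyp = client.recv(4)
    addr = socket.inet_ntoa(client.recv(4))
    port = int.from_bytes(client.recv(2), "big")

    upstream = socket.create_connection((addr, port))
    # Success reply; zeroed BND.ADDR/BND.PORT is tolerated by most clients.
    client.sendall(b"\x05\x00\x00\x01" + b"\x00" * 6)

    # Relay bytes in both directions until either side closes.
    threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()
    pipe(client, upstream)

def serve(listener):
    """Accept loop: one relay thread per client connection."""
    while True:
        conn, _ = listener.accept()
        threading.Thread(target=handle_client, args=(conn,), daemon=True).start()
```

In the paper's scenario, the client-to-proxy leg is where MPTCP's multiple subflows operate, while the proxy-to-server leg (the `upstream` socket here) stays single-path TCP; this split is why the server-to-proxy latency dominates the measured MPTCP throughput.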
