
Predicting performance and scaling behaviour in a data center with multiple application servers



Abstract

As web pages become more user friendly and interactive, we see that objects such as pictures, media files, CGI scripts and databases are used more frequently. This development places increased stress on the servers due to intensified CPU usage and a growing need for bandwidth to serve the content. At the same time, users expect low latency and high availability. This dilemma can be solved by implementing load balancing between the servers serving content to the clients. Load balancing can provide high availability through redundant server solutions, and reduce latency by dividing load. This paper describes a comparative study of different load balancing algorithms used to distribute packets among a set of equal web servers serving HTTP content. For packet redirection, a Nortel Application Switch 2208 will be used, and the servers will be hosted on 6 IBM blade servers. We will compare three different algorithms: Round Robin, Least Connected and Response Time. We will look at properties such as response time, traffic intensity and traffic type, and ask how these algorithms perform when these variables change with time. If we can find correlations between traffic intensity and the efficiency of the algorithms, we might be able to deduce a theoretical suggestion on how to create an adaptive load balancing scheme that uses current traffic intensity to select the appropriate algorithm. We will also see how classical queueing models can be used to calculate expected response times, and whether these numbers conform to the experimental results. Our results indicate that there are measurable differences between load balancing algorithms. We also found that our servers outperformed the queueing models in most of the scenarios.
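The three algorithms named in the abstract differ only in how the next backend is chosen. The following is a minimal sketch of those selection rules, not the switch's actual implementation; the `Server` class and its fields are hypothetical stand-ins for state a load balancer would track.

```python
import itertools

class Server:
    """Hypothetical stand-in for one backend web server's tracked state."""
    def __init__(self, name):
        self.name = name
        self.active_connections = 0       # open connections routed here
        self.last_response_time = 0.0     # seconds, from the most recent probe

def round_robin(servers, _counter=itertools.count()):
    """Cycle through the servers in a fixed order, one per request."""
    return servers[next(_counter) % len(servers)]

def least_connected(servers):
    """Pick the server currently holding the fewest open connections."""
    return min(servers, key=lambda s: s.active_connections)

def response_time(servers):
    """Pick the server with the lowest recently measured response time."""
    return min(servers, key=lambda s: s.last_response_time)
```

Round Robin ignores server state entirely, while the other two react to load, which is why one might expect their relative merits to shift as traffic intensity and type change.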
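The abstract's comparison against "classical queueing algorithms" can be illustrated with the simplest such model. Assuming each backend is treated as an independent M/M/1 queue and load is split evenly (an idealization, not the thesis's exact model), the expected response time is W = 1/(μ − λ) per server:

```python
def mm1_response_time(arrival_rate, service_rate):
    """Expected response time W = 1 / (mu - lambda) for an M/M/1 queue.

    arrival_rate (lambda) and service_rate (mu) are in requests/second;
    the queue is only stable when lambda < mu.
    """
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

def balanced_response_time(total_arrival_rate, service_rate, n_servers):
    """Expected response time when traffic is split evenly across
    n identical servers, each modelled as its own M/M/1 queue."""
    return mm1_response_time(total_arrival_rate / n_servers, service_rate)
```

For example, 300 req/s spread over 6 servers that each serve 100 req/s gives each server λ = 50, hence W = 1/(100 − 50) = 0.02 s. Comparing such predictions with measured latencies is what the abstract means by checking whether the numbers conform to the experimental results.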

Bibliographic information

  • Author

    Undheim Gard;

  • Affiliation
  • Year: 2006
  • Total pages
  • Original format: PDF
  • Language: eng
  • CLC classification
