Home > Foreign Journals > IEEE Transactions on Parallel and Distributed Systems > Integrating Concurrency Control in n-Tier Application Scaling Management in the Cloud

Integrating Concurrency Control in n-Tier Application Scaling Management in the Cloud



Abstract

Scaling complex distributed systems such as e-commerce platforms is an important practice for simultaneously achieving high performance and high resource efficiency in the cloud. Most previous research focuses on hardware resource scaling to handle runtime workload variation. Through extensive experiments using a representative n-tier web application benchmark (RUBBoS), we demonstrate that scaling an n-tier system by adding or removing VMs without appropriately re-allocating soft resources (e.g., server threads and connections) may lead to significant performance degradation, resulting from an implicit change of request processing concurrency in the system that causes either over- or under-utilization of the critical hardware resource in the system. We build a concurrency-aware model that determines a near-optimal soft resource allocation for each tier by combining operational queuing laws with fine-grained online measurement data from the system. We then develop a dynamic concurrency management (DCM) framework that integrates the concurrency-aware model to intelligently reallocate soft resources in the system during the scaling process. We compare DCM with Amazon EC2-AutoScale, the state-of-the-art hardware-only scaling management solution, using six real-world bursty workload traces. The experimental results show that DCM achieves significantly shorter tail latency and higher throughput than Amazon EC2-AutoScale under all the workload traces.
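To illustrate the kind of operational law the abstract refers to, the sketch below applies Little's Law (N = X · R: average in-flight requests equal throughput times response time) to derive a per-tier thread-pool size from online measurements. This is a minimal, hypothetical example, not the paper's actual model; tier names, measured values, and the headroom factor are assumptions for illustration.

```python
import math

def near_optimal_concurrency(throughput_rps, avg_response_time_s, headroom=1.2):
    """Little's Law: average number of in-flight requests N = X * R.

    The result approximates how many server threads a tier needs so that
    requests are neither queued behind too few threads (under-allocation)
    nor thrashing the critical hardware resource (over-allocation).
    A small headroom factor absorbs short bursts without oversizing the pool.
    """
    return math.ceil(throughput_rps * avg_response_time_s * headroom)

# Hypothetical online measurements for a three-tier deployment
# (throughput in requests/sec, mean per-tier response time in seconds).
tiers = {
    "web": {"throughput_rps": 400.0, "resp_s": 0.020},
    "app": {"throughput_rps": 400.0, "resp_s": 0.050},
    "db":  {"throughput_rps": 400.0, "resp_s": 0.010},
}

allocation = {
    name: near_optimal_concurrency(m["throughput_rps"], m["resp_s"])
    for name, m in tiers.items()
}
print(allocation)  # e.g. {'web': 10, 'app': 24, 'db': 5}
```

Note that when the system scales out (more VMs per tier), per-VM throughput drops and the computed pool size shrinks accordingly, which is why soft resources must be re-derived after every hardware scaling action rather than left at a static setting.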
