Future Generation Computer Systems

Resource management for bursty streams on multi-tenancy cloud environments


Abstract

The number of applications that need to process data continuously over long periods of time has increased significantly in recent years. The emerging Internet of Things and Smart Cities scenarios also confirm the requirement for real-time, large-scale data processing. When data from multiple sources are processed over a shared distributed computing infrastructure, it is necessary to provide some Quality of Service (QoS) guarantees for each data stream, specified in a Service Level Agreement (SLA). SLAs specify the price that a user must pay to achieve the required QoS, and the penalty that the provider will pay the user in case of QoS violation. Assuming revenue maximization as a Cloud provider's objective, the provider must decide which streams to accept for storage and analysis, and how many resources to allocate to each stream. When the real-time requirements demand a rapid reaction, dynamic resource provisioning policies and mechanisms may not be useful, since the delays and overheads incurred might be too high. Alternatively, idle resources that were initially allocated to other streams can be re-allocated, avoiding subsequent penalties. In this paper, we propose a system architecture, composed of self-regulating nodes, for supporting QoS for concurrent data streams. Each node features an envelope process for regulating and controlling data access, and a resource manager that enables resource allocation and selective SLA violations while maximizing revenue. Our resource manager, based on a shared token bucket, enables: (i) the redistribution of unused resources amongst data streams; and (ii) a dynamic re-allocation of resources to streams likely to generate greater profit for the provider. We extend previous work by providing a Petri-net based model of the system components, and we evaluate our approach on an OpenNebula-based Cloud infrastructure.
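The shared-token-bucket idea in the abstract — each stream holds a reserved rate, and tokens left unused by one stream spill into a pool that other streams may borrow from — can be illustrated with a minimal sketch. This is not the paper's implementation; the class name, the refill/consume interface, and the spill-to-pool policy are illustrative assumptions.

```python
class SharedTokenBucket:
    """Illustrative sketch: per-stream token buckets backed by a shared pool.

    Each stream gets a reserved refill rate (its SLA budget). Tokens that a
    stream cannot absorb (its private bucket is full) spill into a shared
    pool, from which any stream may borrow when its own tokens run out.
    """

    def __init__(self, rates):
        self.rates = dict(rates)                      # stream -> tokens per refill
        self.caps = {s: r for s, r in rates.items()}  # private cap = one refill quantum
        self.tokens = {s: 0.0 for s in rates}         # private buckets
        self.shared = 0.0                             # pool of unused tokens

    def refill(self):
        """Top up each private bucket; unused allocation spills to the pool."""
        for s, r in self.rates.items():
            free = self.caps[s] - self.tokens[s]
            used = min(r, free)
            self.tokens[s] += used
            self.shared += r - used

    def consume(self, stream, n):
        """Spend n tokens: own bucket first, then borrow from the shared pool."""
        own = min(n, self.tokens[stream])
        borrowed = n - own
        if borrowed > self.shared:
            return False  # budget exceeded; request rejected (or SLA at risk)
        self.tokens[stream] -= own
        self.shared -= borrowed
        return True
```

A bursty stream can thus momentarily exceed its reserved rate by drawing on tokens that quieter tenants did not use, which is the redistribution behavior (i) described above.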
