IEEE International Parallel and Distributed Processing Symposium Workshops and PhD Forum

Network Delay-Aware Load Balancing in Selfish and Cooperative Distributed Systems



Abstract

We consider a geographically distributed request processing system composed of various organizations and their servers connected by the Internet. The latency a user observes is the sum of communication delays and the time needed to handle the request on a server. The handling time depends on the server congestion, i.e. the total number of requests a server must handle. We analyze the problem of balancing the load in a network of servers in order to minimize the total observed latency. We consider both cooperative and selfish organizations (each organization aiming to minimize the latency of its locally-produced requests). The problem generalizes to task scheduling in a distributed cloud, or to content delivery in an organizationally-distributed CDN. In a cooperative network, we show that the problem is polynomially solvable. We also present a distributed algorithm that iteratively balances the load. We show how to estimate the distance between the current solution and the optimum based on the amount of load exchanged by the algorithm. In the experimental evaluation, we show that the distributed algorithm is efficient and can therefore be used in networks with dynamically changing loads. In a network of selfish organizations, we prove that the price of anarchy (the worst-case loss of performance due to selfishness) is low when the network is homogeneous and the servers are loaded (the request handling time is high compared to the communication delay). After relaxing these assumptions, we assess the loss of performance caused by selfishness experimentally, showing that it remains low. Our results indicate that a set of servers handling requests, connected in a heterogeneous network, can be efficiently managed by a distributed algorithm. Additionally, even if the network is organizationally distributed, with individual organizations optimizing the performance of their own requests, the network remains efficient.
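
The abstract describes latency as a communication delay plus a congestion-dependent handling time, and a distributed algorithm that iteratively balances load. The following Python sketch illustrates that model under illustrative assumptions only: the linear handling-time function and the greedy single-request improvement loop below are not taken from the paper.

    """Minimal sketch of the cooperative load-balancing model from the abstract.
    Assumptions (ours, not the authors'): handling time on server s is
    alpha[s] * (total load on s), and balancing proceeds by greedy
    single-request moves."""


    def total_latency(x, comm, alpha):
        """x[o][s]   : requests from organization o assigned to server s
        comm[o][s]: network delay between organization o and server s
        alpha[s]  : handling-time slope of server s (assumed linear in load)
        """
        load = [sum(x[o][s] for o in range(len(x))) for s in range(len(alpha))]
        return sum(x[o][s] * (comm[o][s] + alpha[s] * load[s])
                   for o in range(len(x)) for s in range(len(alpha)))


    def balance(x, comm, alpha, max_iters=10_000):
        """Greedily move single requests between servers while latency drops."""
        for _ in range(max_iters):
            base = total_latency(x, comm, alpha)
            best = None
            for o in range(len(x)):
                for src in range(len(alpha)):
                    if x[o][src] == 0:
                        continue
                    for dst in range(len(alpha)):
                        if dst == src:
                            continue
                        # Tentatively move one request from src to dst.
                        x[o][src] -= 1; x[o][dst] += 1
                        gain = base - total_latency(x, comm, alpha)
                        x[o][src] += 1; x[o][dst] -= 1
                        if gain > 0 and (best is None or gain > best[0]):
                            best = (gain, o, src, dst)
            if best is None:
                return x  # no improving single-request move remains
            _, o, src, dst = best
            x[o][src] -= 1
            x[o][dst] += 1
        return x

For instance, with two organizations that initially keep all their requests local (x = [[4, 0], [0, 4]]), a cross-site delay of 1.0, and servers of different speeds (alpha = [0.5, 2.0]), the loop shifts requests toward the faster server until no single-request move reduces the total latency.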
