Performance Evaluation Review

Delay-Optimal Policies in Partial Fork-Join Systems with Redundancy and Random Slowdowns


Abstract

We consider a large distributed service system consisting of n homogeneous servers with infinite-capacity FIFO queues. Jobs arrive as a Poisson process of rate λn/k_n (for some positive constant λ and positive integer k_n). Each incoming job consists of k_n identical tasks that can be executed in parallel, and that can be encoded into at least k_n "replicas" of the same size (by introducing redundancy), so that the job is considered complete when any k_n of its replicas finish service. Moreover, we assume that servers can experience random slowdowns in their processing rate, so that the service time of a replica is the product of its size and a random slowdown. First, we assume that the server slowdowns are shifted exponential and independent of the replica sizes. In this setting we show that the delay of a typical job is asymptotically minimized (as n → ∞) when the number of replicas per task is a constant that depends only on the arrival rate λ and on the expected slowdown of the servers. Second, we introduce a new model for the server slowdowns in which larger tasks experience less variable slowdowns than smaller tasks. In this setting we show that, within the class of policies where all replicas of a task start service at the same time, the delay of a typical job is asymptotically minimized (as n → ∞) when the number of replicas per task depends on the actual size of the tasks being replicated, with smaller tasks replicated more than larger ones.
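The replication mechanism described in the abstract can be sketched with a small Monte Carlo simulation. This is a hypothetical illustration only (all function names are ours, and queueing delay and system load are ignored): it estimates just the service stage, where a job of k tasks is encoded into r ≥ k same-size replicas that start service simultaneously, each replica's service time being its size times a shifted-exponential slowdown Δ + Exp(μ), as in the paper's first model.

```python
import random

def replica_service_time(size, delta=1.0, mu=1.0):
    """Service time of one replica: its size times a shifted-exponential
    slowdown delta + Exp(mu) (illustrative parameter choices)."""
    return size * (delta + random.expovariate(mu))

def job_service_time(k, r, size=1.0):
    """A job of k tasks encoded into r >= k replicas that all start
    service at once; the job finishes when any k replicas finish,
    i.e. at the k-th order statistic of the r replica times."""
    assert r >= k
    times = sorted(replica_service_time(size) for _ in range(r))
    return times[k - 1]

def mean_job_service_time(k, r, trials=20_000, seed=0):
    """Monte Carlo estimate of the expected job service time."""
    random.seed(seed)
    return sum(job_service_time(k, r) for _ in range(trials)) / trials
```

For example, with k = 4 tasks, adding redundancy (r = 8 instead of r = 4) lowers the k-th order statistic of the replica finish times and hence the job's service time. The paper's contribution lies in the full queueing analysis: identifying the redundancy level that optimally trades this per-job gain against the extra load redundancy places on the servers.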
