Annual Conference on Information Sciences and Systems

Worst-Case Delay Bounds for Packetizing Time-Varying Fluid Schedules for a Single Server in a Packet-Switched Network

Abstract

An on-line processor-sharing problem for a single server within a packet-switched network is considered. The input to the problem, at each time step, is a set of desired processing rates. These rates in general cannot be achieved exactly, since they treat the incoming data as fluid; that is, they assume that arbitrarily small fractions of packets can be processed at each time step. The goal is to closely approximate the given, time-varying sequence of desired rates by a sequence of processor allocations in which only whole packets are processed. The focus of this paper is bounding the additional packet delay incurred by such an approximation. We analyze the worst-case packet delay in a single-server traffic scheduler that processes N incoming data flows, when total link usage is at most the server capacity. We consider how optimal delay bounds depend on the resources available to a packet-scheduling algorithm. In particular, we analyze the trade-offs between speedup and lookahead (knowledge of the fluid policy's future behavior) required to bound delay for arbitrary, time-varying fluid policies. First, we prove that for scheduling algorithms using bounded lookahead and no speedup, worst-case packet delay grows linearly with the number of data flows N. Next, we prove that for algorithms using no lookahead, worst-case delay grows linearly with N and decreases exponentially with speedup. Lastly, we consider algorithms using both lookahead and speedup: for fixed speedup and maximum tolerable delay, we lower bound the necessary lookahead and upper bound the sufficient lookahead, both by functions linear in N.
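To make the packetization setting concrete, the following is a minimal sketch (not the paper's algorithm or its bounds): each flow accrues fluid service credit according to the desired rates, a greedy scheduler sends whole packets to the flows with the largest accrued credit, and the largest remaining credit at any step measures how far the packetized schedule lags the fluid one. The function name `packetize` and the credit-based greedy rule are assumptions introduced purely for illustration.

```python
# Illustrative sketch of packetizing a time-varying fluid schedule.
# Not the paper's algorithm: a simple "largest credit first" greedy rule,
# with no speedup and no lookahead, used only to show the setting.

def packetize(fluid_rates, capacity=1):
    """fluid_rates: list of per-step rate vectors, one entry per flow,
    in units of packets per step (each vector summing to <= capacity).
    Returns the packetized schedule and the worst lag (in packets)
    behind the fluid schedule observed over the run."""
    n = len(fluid_rates[0])
    credit = [0.0] * n          # fluid service owed minus packets sent
    schedule, worst_lag = [], 0.0
    for rates in fluid_rates:
        for i in range(n):
            credit[i] += rates[i]
        sent = [0] * n
        # Serve up to `capacity` whole packets, most-owed flow first,
        # but only once a full packet of fluid service has accrued.
        for _ in range(capacity):
            i = max(range(n), key=lambda j: credit[j])
            if credit[i] >= 1.0:
                credit[i] -= 1.0
                sent[i] += 1
        schedule.append(sent)
        worst_lag = max(worst_lag, max(credit))
    return schedule, worst_lag
```

For example, two flows each requesting half a packet per step force the server to alternate, so one flow always lags by up to a full packet; with many flows the analogous lag grows with N, which is the linear-in-N behavior the paper's lower bounds formalize.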