
Incentive-centered design for scheduling in parallel and distributed systems.



Abstract

Distributed systems comprise components that are owned and operated by autonomous organizations. These organizations are rational (self-interested and welfare-maximizing) and have no a priori motivation to cooperate. Rationality leads them to manipulate the system whenever doing so benefits them. Organizations respond to incentives; to prevent manipulation, incentives must be explicitly aligned with the system's objectives. Incentive-centered design (ICD), which integrates the motivational, human-behavior perspective of the social sciences with the computational-tractability perspective of computer science, specifically addresses these concerns and provides methods to mitigate them.

We examine the incentives present in several distributed-system resource allocation problems. We model these problems as games and then use ICD to devise protocols that obtain their desired objectives in a self-interested, competitive setting. The first problem we examine is the design of incentive-compatible mechanisms for scheduling divisible loads in distributed systems. Divisible load is a form of data parallelism in which a large data set is partitioned into fractions, all of which require an identical type of processing. Scheduling requires knowledge of the processors' capacities. Since the processors are rational, they may choose to misreport their capacities, which adversely affects the quality of the schedule. We design several mechanisms that obtain optimal schedules in this setting. Each mechanism is specific to a network architecture, as the architecture affects the partitioning and the optimality conditions. We examine three architectures: bus, linear, and tree networks. For the bus network, we design three classical centralized mechanisms. Centralized mechanisms are not feasible for the tree and linear networks because the processors cannot communicate directly with the mechanism center, a limitation of these architectures.
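To make the divisible-load setting concrete, here is a minimal Python sketch of the bus-network case, assuming negligible communication cost and linear processing time; under those assumptions, all processors finish simultaneously when each fraction is proportional to processor speed. The function names and the misreporting demonstration are illustrative only, not the dissertation's actual mechanisms.

```python
def optimal_fractions(speeds):
    """Partition a divisible load among processors on a bus network so
    that all processors finish at the same time.  With communication
    cost ignored (a simplifying assumption), the optimal fraction for
    processor i is s_i / sum(s)."""
    total = sum(speeds)
    return [s / total for s in speeds]

def makespan(speeds, fractions, load=1.0):
    """Time until the last processor finishes its assigned fraction."""
    return max(f * load / s for f, s in zip(fractions, speeds))
```

If a processor under-reports its speed, the scheduler computes fractions from the reported capacities, and the makespan at the true speeds exceeds the truthful optimum: this is exactly the manipulation that incentive-compatible mechanisms are designed to deter.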
For those architectures we instead design distributed mechanisms in which the information and algorithms are distributed throughout the network. These mechanisms introduce new avenues for manipulation, which we address by creating incentives for processors to monitor one another. We then examine divisible load scheduling using coalitional games. An application owner receives a payoff when her job finishes. The amount she receives, however, decreases with time, and the payoff must be shared with the processors computing her job. She forms a coalition with a set of processors to maximize her profit, the difference between payoff and costs. We model this problem as a coalitional game and show that stable coalitions between application users and processors occur.

We next examine parallel job scheduling in parallel systems comprising identical processors. A parallel job differs from a sequential job or task in that it can execute on two or more processors. We consider two types of parallel job scheduling. The first is the batch scheduling of rigid jobs. Batch scheduling requires that all jobs are present at the start of scheduling. A parallel job is rigid if it requires a fixed number of processors: if fewer processors are assigned, the job fails; if more are assigned, the job does not speed up. Associated with each job is a deadline and a payoff. The job's owner receives the payoff if her job completes by its deadline; otherwise, her payoff is zero. The scheduling objective is to maximize the sum of payoffs. The users are rational: they will misreport their jobs' parameters if it benefits them to do so. We design an incentive-compatible scheduling mechanism aimed specifically at this problem. Because the mechanism is incentive compatible, rational users choose to truthfully declare their parameters to the scheduler. The second type of parallel job scheduling we consider is the online scheduling of malleable jobs.
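The rigid-job batch problem can be sketched in a few lines. The sketch below uses a plain greedy heuristic (highest payoff first, earliest feasible start) over discrete time steps; it illustrates only the problem's structure, not the incentive-compatible mechanism the dissertation designs, and the job encoding is an assumption of this example.

```python
def greedy_rigid_schedule(jobs, num_procs, horizon):
    """Greedy batch scheduler for rigid jobs (illustrative only).
    jobs: list of (size, length, deadline, payoff) tuples, where size
    is the fixed processor requirement.  Jobs are tried in decreasing
    payoff order at the earliest start that meets the deadline; a job
    that cannot meet its deadline is rejected (payoff zero).
    Returns (total_payoff, {job_index: start_time})."""
    free = [num_procs] * horizon          # free processors per time step
    order = sorted(range(len(jobs)), key=lambda j: -jobs[j][3])
    schedule, total = {}, 0
    for j in order:
        size, length, deadline, payoff = jobs[j]
        for start in range(min(deadline, len(free)) - length + 1):
            if all(free[t] >= size for t in range(start, start + length)):
                for t in range(start, start + length):
                    free[t] -= size
                schedule[j] = start
                total += payoff
                break
    return total, schedule
```

A greedy rule like this is neither optimal nor truthful in general; the point of the mechanism-design treatment is precisely that users could otherwise gain by misreporting size, length, deadline, or payoff.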
Online scheduling differs from batch scheduling in that the scheduler cannot foresee job arrivals; the scheduling objective must be optimized despite this lack of information. Unlike a rigid job, a malleable job can be adapted to execute on any number of processors. Each job is associated with an arrival time (release time), a deadline, and a payoff. As in the rigid-job problem, if the job completes by its deadline the self-interested user receives the payoff; otherwise, she receives zero. Besides the online aspect, this problem differs from the rigid one in that the scheduler must choose the number of processors to assign to each job. For this problem we design an online, incentive-compatible mechanism. If a job is scheduled to execute, the mechanism assigns it the number of processors that minimizes its execution time.
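The online admission step for a malleable job can be sketched as follows, under an assumed linear-speedup model (work W on p processors runs in W / min(p, max_parallelism) time). Both the speedup model and the job encoding are assumptions of this sketch; the dissertation's mechanism may differ.

```python
def exec_time(work, procs, max_parallelism):
    """Execution time of a malleable job under an assumed speedup
    model: linear speedup up to the job's maximum useful parallelism,
    no benefit from additional processors beyond it."""
    return work / min(procs, max_parallelism)

def on_arrival(job, free_procs, now):
    """Online admission decision: assign the processor count that
    minimizes execution time given current availability, and accept
    only if the job still meets its deadline.
    job = (work, max_parallelism, deadline, payoff).
    Returns (procs, finish_time) on acceptance, None on rejection."""
    work, maxp, deadline, payoff = job
    procs = min(free_procs, maxp)     # fastest feasible assignment
    if procs == 0:
        return None
    finish = now + exec_time(work, procs, maxp)
    return (procs, finish) if finish <= deadline else None
```

Assigning the execution-time-minimizing processor count, as the abstract states the mechanism does, removes one dimension of strategic choice: a user cannot gain by requesting more or fewer processors than are useful to her job.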

Bibliographic record

  • Author

    Carroll, Thomas E.

  • Affiliation

    Wayne State University.

  • Degree-granting institution: Wayne State University.
  • Subject: Computer Science.
  • Degree: Ph.D.
  • Year: 2009
  • Pagination: 198 p.
  • Total pages: 198
  • Format: PDF
  • Language: eng
  • CLC classification: Automation and computer technology
  • Keywords
