Workshop on Balancing Reactivity and Social Deliberation in Multi-Agent Systems

Deliberation Levels in Theoretic-Decision Approaches for Task Allocation in Resource-Bounded Agents


Abstract

In this paper we develop a new decision-theoretic model of task allocation among distributed, cooperative, resource-bounded agents and study its effect on the responsiveness of the system. Two architectures for task allocation, and their respective levels of deliberation, are discussed. In both architectures the following holds: (1) agents have limited resources and estimated distributions over the resources required to execute tasks, and (2) agents create new tasks that they send to a central controller for distribution among them. The main difference between the two architectures lies in where the allocation decision-making is performed. In the first architecture, the central controller computes a globally optimal task-allocation decision using a dynamic programming model and an estimated distribution over resources. In the second architecture, each agent computes a locally optimal decision, and the central controller coordinates these distributed locally optimal decisions. In both architectures, we formulate the standard task-allocation problem as a Markov Decision Process (MDP). The states of the MDP represent the current state of the allocation in terms of the tasks allocated to each agent and the available resources. It is well known that such approaches involve a high level of deliberation, which can reduce their efficiency in dynamic situations. We then discuss the effect of the two architectures on the balance between the deliberative and reactive behavior of the system.
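To make the centralized (first) architecture concrete, the sketch below runs a dynamic program over allocation states of the form (next task, remaining resources per agent), as in the MDP formulation above. The task costs, agent capacities, and objective (maximize the number of allocated tasks) are illustrative assumptions, not the paper's exact model:

```python
from functools import lru_cache

def allocate(tasks, capacities):
    """Centralized DP over allocation states: a state is
    (index of next task, remaining resources per agent).
    Returns the maximum number of tasks that can be allocated
    and one optimal assignment {task index -> agent index or None}."""
    n = len(tasks)

    @lru_cache(maxsize=None)
    def best(i, remaining):
        if i == n:
            return 0, ()
        # Option 1: leave task i unallocated.
        value, plan = best(i + 1, remaining)
        choice = (value, ((i, None),) + plan)
        # Option 2: give task i to any agent with enough resources left.
        for a, r in enumerate(remaining):
            if r >= tasks[i]:
                nxt = remaining[:a] + (r - tasks[i],) + remaining[a + 1:]
                v, p = best(i + 1, nxt)
                if v + 1 > choice[0]:
                    choice = (v + 1, ((i, a),) + p)
        return choice

    count, plan = best(0, tuple(capacities))
    return count, dict(plan)
```

For example, `allocate([2, 3, 4], [5, 4])` allocates all three tasks (tasks of cost 2 and 3 to the agent with capacity 5, the task of cost 4 to the other). The distributed (second) architecture would instead have each agent solve a local version of this problem and let the controller reconcile the local plans.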
