International Journal of Intelligent Computing and Cybernetics

Decentralized decision-making technique for dynamic coalition of resource-bounded autonomous agents



Abstract

Purpose - The purpose of this paper is to extend existing coalition-formation approaches so that the size of a coalition can be adapted dynamically to the complexity of the task being accomplished.

Design/methodology/approach - Considerable attention has been paid to the coalition formation problem as a way of dealing efficiently with tasks that require more than one agent (i.e. robot). Little attention, however, has been paid to monitoring a coalition during execution and modifying it according to the progress of the task. In this paper, the authors consider a coalition of resource-bounded autonomous agents with anytime behavior solving a common complex task. There is no central control component. Agents can observe the effects of the other agents' actions, and each can decide whether to keep contributing to the common task or to stop contributing and leave the coalition. This decision is made in a distributed way. The objective is to avoid the waste of resources and time incurred by keeping the same coalition throughout the task even though some agents have become unnecessary to its completion. The authors formalize this decentralized decision-making problem as a decentralized Markov decision process (DEC-MDP).

Findings - The paper results in a framework, Coal-DEC-MDP, which allows each agent to decide whether to stay in the coalition or leave it by estimating the progress of the task.

Research limitations/implications - The approach could be extended to deal with more than one coalition.

Practical implications - Decentralized control of a fleet of robots accomplishing a mission.

Originality/value - The paper addresses the new problem of dynamically adapting a coalition to the target task, and the use of DEC-MDPs for this purpose.
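The abstract formalizes the agents' stay-or-leave decision as a decentralized Markov decision process. For orientation, the generic DEC-MDP tuple from the literature is recalled below; the paper's specific Coal-DEC-MDP components are not reproduced on this page, so everything beyond the generic definition is an illustrative assumption rather than the authors' model.

$$\mathcal{M} = \langle I, S, \{A_i\}_{i \in I}, P, R, \{\Omega_i\}_{i \in I}, O \rangle$$

where $I$ is the set of agents, $S$ the set of joint states, $A_i$ the action set of agent $i$, $P(s' \mid s, a_1, \ldots, a_n)$ the joint transition function, $R(s, a_1, \ldots, a_n)$ the shared reward, $\Omega_i$ the observation set of agent $i$, and $O$ the observation function. A DEC-MDP additionally satisfies joint full observability: the combined observations of all agents determine the global state. Each agent acts on a local policy $\pi_i$ mapping its own observation history to actions; in the setting described in the abstract, leaving the coalition can be modeled as one of the actions in $A_i$.

A minimal Python sketch of such a local stay/leave rule is given below. It assumes a hypothetical progress estimate and threshold; names and logic are illustrative only, not the Coal-DEC-MDP policy described in the paper.

    # Illustrative sketch only: a hypothetical local stay/leave rule based on an
    # agent's own estimate of task progress. Thresholds and the progress estimate
    # are assumptions, not the paper's Coal-DEC-MDP policy.
    from dataclasses import dataclass

    @dataclass
    class LocalAgent:
        agent_id: int
        remaining_resources: float

        def estimate_progress(self, observations: list[float]) -> float:
            # Placeholder: in the paper's setting, progress is estimated from the
            # observed effects of the other agents' actions on the common task.
            return min(1.0, sum(observations))

        def decide(self, observations: list[float], threshold: float = 0.9) -> str:
            progress = self.estimate_progress(observations)
            # Leave when the task is close enough to completion that this agent's
            # contribution is no longer needed, or when its resources are exhausted.
            if progress >= threshold or self.remaining_resources <= 0.0:
                return "leave"
            return "stay"

    # Example: an agent observing partial progress chooses to stay.
    agent = LocalAgent(agent_id=1, remaining_resources=5.0)
    print(agent.decide([0.2, 0.3, 0.1]))  # -> "stay"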
