Symposium on Adaptive Agents and Multi-Agent Systems

Reinforcement Learning of Coordination in Heterogeneous Cooperative Multi-agent Systems

Abstract

Most approaches to the learning of coordination in multi-agent systems (MAS) to date require all agents to use the same learning algorithm with similar (or even the same) parameter settings. In today's open, highly interconnected networks, such an assumption becomes increasingly unrealistic. Developers are starting to have less control over the agents that join the system and the learning algorithms those agents employ. This makes effective coordination and good learning performance extremely difficult to achieve, especially in the absence of standards for learning agents. In this paper we investigate the problem of learning to coordinate with heterogeneous agents. We show that an agent employing the FMQ algorithm, a recently developed multi-agent learning method, is able to converge towards the optimal joint action when teamed up with one or more simple Q-learners. Specifically, we show such convergence in scenarios where simple Q-learners alone are unable to converge towards an optimum. Our results show that system designers may improve learning and coordination performance by adding a "smart" agent to the MAS.
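To make the setup the abstract describes concrete, below is a minimal Python sketch of a heterogeneous team: one agent using the FMQ heuristic paired with a plain independent Q-learner in a repeated single-stage cooperative game. It follows the published FMQ formulation of Kapetanakis and Kudenko, in which action selection feeds EV(a) = Q(a) + c · freq(rmax(a)) · rmax(a) into a Boltzmann distribution. The climbing-game payoff matrix, learning rate, FMQ weight c, and temperature schedule used here are illustrative assumptions, not values taken from this paper's experiments.

```python
import math
import random

PAYOFF = [[11, -30, 0],   # climbing-game payoffs (assumed benchmark):
          [-30, 7, 6],    # joint action (0, 0) = 11 is optimal, but
          [0, 0, 5]]      # miscoordination around it is heavily penalised

def boltzmann(values, temp):
    """Sample an action index from a Boltzmann distribution over values."""
    vmax = max(values)  # shift exponents for numerical stability
    prefs = [math.exp((v - vmax) / temp) for v in values]
    r = random.random() * sum(prefs)
    for i, p in enumerate(prefs):
        r -= p
        if r <= 0:
            return i
    return len(values) - 1

class QLearner:
    """Plain independent Q-learner for a single-stage game."""
    def __init__(self, n_actions, alpha=0.1):
        self.q = [0.0] * n_actions
        self.alpha = alpha
    def values(self):                 # estimates fed into action selection
        return self.q
    def act(self, temp):
        return boltzmann(self.values(), temp)
    def update(self, a, r):
        self.q[a] += self.alpha * (r - self.q[a])

class FMQLearner(QLearner):
    """Q-learner whose action selection uses the FMQ heuristic:
    EV(a) = Q(a) + c * freq(rmax(a)) * rmax(a)."""
    def __init__(self, n_actions, alpha=0.1, c=10.0):
        super().__init__(n_actions, alpha)
        self.c = c
        self.r_max = [-math.inf] * n_actions  # best reward seen per action
        self.n_max = [0] * n_actions          # times that best reward occurred
        self.n = [0] * n_actions              # times each action was played
    def values(self):
        evs = []
        for a in range(len(self.q)):
            if self.n[a] == 0:
                evs.append(self.q[a])
            else:
                freq = self.n_max[a] / self.n[a]
                evs.append(self.q[a] + self.c * freq * self.r_max[a])
        return evs
    def update(self, a, r):
        super().update(a, r)
        self.n[a] += 1
        if r > self.r_max[a]:
            self.r_max[a], self.n_max[a] = r, 1
        elif r == self.r_max[a]:
            self.n_max[a] += 1

# Heterogeneous team: one FMQ agent, one plain Q-learner.
agent1, agent2 = FMQLearner(3), QLearner(3)
for t in range(2000):
    temp = max(0.05, 50.0 * math.exp(-0.006 * t))  # decaying temperature
    a, b = agent1.act(temp), agent2.act(temp)
    r = PAYOFF[a][b]            # fully cooperative: both get the same reward
    agent1.update(a, r)
    agent2.update(b, r)

print("greedy joint action:",
      max(range(3), key=lambda a: agent1.values()[a]),
      max(range(3), key=lambda b: agent2.q[b]))
```

In line with the abstract's claim, a pair of plain Q-learners in this game tends to settle on safe but suboptimal actions under decaying-temperature Boltzmann exploration, whereas the FMQ agent's optimistic value estimates pull the team towards the highest-payoff joint action.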
