Fourth International Conference on Natural Computation (ICNC 2008)

Regional Cooperative Multi-agent Q-learning Based on Potential Field


Abstract

More and more artificial intelligence researchers have focused on reinforcement-learning (RL) based multi-agent systems (MAS). Multi-agent learning problems can in principle be solved by treating the joint actions of the agents as single actions and applying single-agent Q-learning. However, the number of joint actions grows exponentially with the number of agents, rendering this approach infeasible for most problems. In this paper we investigate a regional cooperative representation of the Q-function based on a potential field, in which joint actions are considered only in those states where coordination is actually required; in all other states single-agent Q-learning is applied. This offers a compact state-action value representation without compromising much in terms of solution quality. We have performed experiments in the RoboCup 2D simulation league, an ideal testing platform for multi-agent systems, and compared our algorithm to other multi-agent reinforcement learning algorithms, with promising results.
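
The hybrid scheme described in the abstract lends itself to a compact implementation. Below is a minimal, illustrative Python sketch of the idea for two grid-world agents. It is not the authors' implementation: the predicate `is_coordination_state` (which the paper would derive from the potential field), the grid-world state encoding, and the averaging of per-agent values are assumptions made for illustration only.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

ACTIONS = ["north", "south", "east", "west", "stay"]
JOINT_ACTIONS = [(a, b) for a in ACTIONS for b in ACTIONS]

# Individual Q-tables, one per agent, indexed by (state, action).
q_single = [defaultdict(float), defaultdict(float)]
# Joint Q-table, used only in coordination states, indexed by (state, joint_action).
q_joint = defaultdict(float)

def is_coordination_state(state):
    """Hypothetical predicate: the paper derives coordination regions from a
    potential field; here we simply flag states where the agents are adjacent."""
    (x1, y1), (x2, y2) = state
    return abs(x1 - x2) + abs(y1 - y2) <= 1

def select_actions(state):
    """Epsilon-greedy selection over joint actions in coordination states,
    otherwise independent epsilon-greedy selection per agent."""
    if is_coordination_state(state):
        if random.random() < EPSILON:
            return random.choice(JOINT_ACTIONS)
        return max(JOINT_ACTIONS, key=lambda ja: q_joint[(state, ja)])
    chosen = []
    for q in q_single:
        if random.random() < EPSILON:
            chosen.append(random.choice(ACTIONS))
        else:
            chosen.append(max(ACTIONS, key=lambda a: q[(state, a)]))
    return tuple(chosen)

def best_future_value(next_state):
    """Bootstrap from the joint table in coordination states, otherwise from
    the (averaged) individual tables -- an assumption for this sketch."""
    if is_coordination_state(next_state):
        return max(q_joint[(next_state, ja)] for ja in JOINT_ACTIONS)
    return sum(max(q[(next_state, a)] for a in ACTIONS)
               for q in q_single) / len(q_single)

def update(state, joint_action, reward, next_state):
    """One Q-learning step: joint update where coordination is required,
    independent per-agent updates everywhere else."""
    target = reward + GAMMA * best_future_value(next_state)
    if is_coordination_state(state):
        q_joint[(state, joint_action)] += ALPHA * (target - q_joint[(state, joint_action)])
    else:
        for q, a in zip(q_single, joint_action):
            q[(state, a)] += ALPHA * (target - q[(state, a)])

# Example transition: the agents start on adjacent cells, so the joint
# Q-table is used for this state.
s = ((0, 0), (0, 1))
ja = select_actions(s)
update(s, ja, reward=1.0, next_state=((0, 1), (0, 2)))
```

The key point of the sketch is that the joint Q-table is only ever indexed in states flagged as coordination states, so its memory footprint scales with the size of the coordination region rather than with the full state space.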
