Published in: International Conference on Principles and Practice of Multi-Agent Systems

Coordination in Collaborative Work by Deep Reinforcement Learning with Various State Descriptions



Abstract

Cooperation and coordination are sophisticated behaviors and remain major issues in multi-agent systems research, because how agents cooperate and coordinate depends not only on environmental characteristics but also on their behaviors and strategies, which closely affect one another. Meanwhile, multi-agent deep reinforcement learning (MADRL) has recently received much attention because it may enable agents to learn and facilitate coordinated behaviors. However, the characteristics of such socially learned coordination structures have not been sufficiently clarified. In this paper, focusing on MADRL in which each agent has its own deep Q-network (DQN), we show that different types of network input lead to various coordination structures, using the pickup and floor laying problem, an abstract form of our target problem. We also show that the generated coordination structures affect the overall performance of the multi-agent system.
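The setting the abstract describes — independent learners whose coordination structure depends on what each agent observes — can be sketched in miniature. The snippet below is a hypothetical illustration, not the paper's method: tabular Q-learning stands in for the per-agent DQNs, a 1-D gridworld stands in for the pickup and floor laying problem, and the two observation encodings (`obs_local`, `obs_relational`) are assumed names showing how "different types of input to the network" can be varied per agent.

```python
import random
from collections import defaultdict

random.seed(0)

ACTIONS = [-1, 0, 1]  # move left, stay, move right
SIZE = 5              # toy 1-D gridworld (assumption, not the paper's environment)

def obs_local(me, other):
    # Minimal state description: the agent's own position only.
    return (me,)

def obs_relational(me, other):
    # Richer state description: own position plus the other agent's.
    return (me, other)

class IndependentQAgent:
    """Each agent keeps its own Q-table, standing in for a per-agent DQN."""
    def __init__(self, encode, alpha=0.5, gamma=0.9, eps=0.1):
        self.Q = defaultdict(lambda: [0.0] * len(ACTIONS))
        self.encode, self.alpha, self.gamma, self.eps = encode, alpha, gamma, eps

    def act(self, me, other):
        s = self.encode(me, other)
        if random.random() < self.eps:          # epsilon-greedy exploration
            return random.randrange(len(ACTIONS))
        q = self.Q[s]
        return q.index(max(q))

    def learn(self, me, other, a, r, me2, other2):
        s, s2 = self.encode(me, other), self.encode(me2, other2)
        target = r + self.gamma * max(self.Q[s2])
        self.Q[s][a] += self.alpha * (target - self.Q[s][a])

def episode(agents, goals=(0, SIZE - 1), limit=30):
    # Both agents start in the middle; the task is solved only when
    # both reach their (distinct) goal cells simultaneously.
    pos = [SIZE // 2, SIZE // 2]
    for t in range(limit):
        acts = [agents[i].act(pos[i], pos[1 - i]) for i in range(2)]
        new = [min(SIZE - 1, max(0, pos[i] + ACTIONS[acts[i]])) for i in range(2)]
        done = all(new[i] == goals[i] for i in range(2))
        r = 1.0 if done else -0.01              # shared reward couples the learners
        for i in range(2):
            agents[i].learn(pos[i], pos[1 - i], acts[i], r, new[i], new[1 - i])
        pos = new
        if done:
            return t + 1
    return limit

# Agent 0 sees only itself; agent 1 also sees its partner.
agents = [IndependentQAgent(obs_local), IndependentQAgent(obs_relational)]
steps = [episode(agents) for _ in range(300)]
```

Comparing the Q-tables after training shows the point of the abstract in small: the two agents carve up the same task into different state spaces, so the coordination structure that emerges is shaped by the state description each learner was given.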

