International Conference on Autonomous Agents and Multiagent Systems

Coordination Structures Generated by Deep Reinforcement Learning in Distributed Task Executions: Extended Abstract

Abstract

We investigate the coordination structures generated by a deep Q-network (DQN) in distributed task execution. Cooperation and coordination are crucial issues in multi-agent systems, and sophisticated design or learning is required to achieve effective coordination structures or regimes. In this paper, we show that agents establish a division of labor in a bottom-up manner by determining their implicit areas of responsibility when the input to the DQN consists of each agent's own observation and absolute location.
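
As a rough sketch of the input structure mentioned in the abstract, the Python snippet below concatenates an agent's own local observation with its normalized absolute location before feeding both to a Q-network. The PyTorch framework, grid size, 5x5 observation window, action set, and network architecture are illustrative assumptions, not details taken from the paper.

# Minimal sketch of a per-agent DQN input (own observation + absolute location).
# Assumptions for illustration: PyTorch, a grid world, a 5x5 local observation
# window, five actions, and a small fully connected Q-network.
import torch
import torch.nn as nn

GRID_SIZE = 20     # assumed width/height of the shared environment
OBS_WINDOW = 5     # assumed size of each agent's local observation window
NUM_ACTIONS = 5    # e.g. stay / up / down / left / right

class AgentQNetwork(nn.Module):
    """Per-agent Q-network whose input is the agent's own local observation
    concatenated with its absolute (x, y) location."""
    def __init__(self):
        super().__init__()
        obs_dim = OBS_WINDOW * OBS_WINDOW   # flattened local observation
        loc_dim = 2                         # absolute location (x, y), normalized
        self.net = nn.Sequential(
            nn.Linear(obs_dim + loc_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 128),
            nn.ReLU(),
            nn.Linear(128, NUM_ACTIONS),    # one Q-value per action
        )

    def forward(self, local_obs: torch.Tensor, location: torch.Tensor) -> torch.Tensor:
        # local_obs: (batch, OBS_WINDOW, OBS_WINDOW); location: (batch, 2) in grid coordinates
        obs_flat = local_obs.flatten(start_dim=1)
        loc_norm = location / GRID_SIZE     # scale absolute coordinates to [0, 1]
        x = torch.cat([obs_flat, loc_norm], dim=1)
        return self.net(x)

if __name__ == "__main__":
    q_net = AgentQNetwork()
    obs = torch.zeros(1, OBS_WINDOW, OBS_WINDOW)   # dummy local observation
    loc = torch.tensor([[3.0, 17.0]])              # dummy absolute location
    q_values = q_net(obs, loc)
    greedy_action = q_values.argmax(dim=1).item()  # greedy action; training would add exploration
    print(q_values.shape, greedy_action)

Because the absolute location is part of the input, each agent's learned Q-function can differ by position, which is one plausible way the implicit division of the task area described in the abstract could emerge.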
