Annual Conference on Neural Information Processing Systems (NeurIPS)

Learning to Communicate with Deep Multi-Agent Reinforcement Learning



Abstract

We consider the problem of multiple agents sensing and acting in environments with the goal of maximising their shared utility. In these environments, agents must learn communication protocols in order to share information that is needed to solve the tasks. By embracing deep neural networks, we are able to demonstrate end-to-end learning of protocols in complex environments inspired by communication riddles and multi-agent computer vision problems with partial observability. We propose two approaches for learning in these domains: Reinforced Inter-Agent Learning (RIAL) and Differentiable Inter-Agent Learning (DIAL). The former uses deep Q-learning, while the latter exploits the fact that, during learning, agents can backpropagate error derivatives through (noisy) communication channels. Hence, this approach uses centralised learning but decentralised execution. Our experiments introduce new environments for studying the learning of communication protocols and present a set of engineering innovations that are essential for success in these domains.
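The key idea behind DIAL is that, during centralised training, the message channel between agents is kept continuous and noisy so that error derivatives can flow from the receiver back into the sender, while at execution time messages are discretised. The sketch below is an illustrative NumPy reconstruction of such a discretise/regularise channel for a one-bit message, not the authors' implementation; the names `dru` and `sigma` and the scalar gradient step are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dru(m, training, sigma=2.0):
    """Discretise/regularise unit for a 1-bit message channel (sketch).

    Training: add Gaussian noise and squash with a sigmoid, keeping the
    channel differentiable so error derivatives can flow from the
    receiving agent back into the sending agent's network.
    Execution: hard-threshold to a discrete bit (decentralised use).
    """
    if training:
        return sigmoid(m + sigma * rng.normal())
    return 1.0 if m > 0 else 0.0

# Backpropagating an error derivative through the noisy channel: with
# upstream gradient g at the receiver, the sender receives
# g * s * (1 - s), the sigmoid's local derivative.
m = 0.8                                # sender's raw message activation
s = dru(m, training=True)              # noisy, continuous message
g_receiver = 1.0                       # hypothetical upstream gradient
g_sender = g_receiver * s * (1.0 - s)  # gradient reaching the sender
```

At execution time the same unit emits a hard 0/1 bit, which is what "centralised learning but decentralised execution" refers to: gradients cross the channel only during training.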
