Procedia Computer Science

A unified framework for reinforcement learning, co-learning and meta-learning how to coordinate in collaborative multi-agent systems



Abstract

Coordination among multiple autonomous, distributed cognitive agents is one of the most challenging and ubiquitous problems in Distributed AI and its applications in general, and in collaborative multi-agent systems in particular. A particularly prominent problem in multi-agent coordination is that of group, team or coalition formation. A considerable majority of the approaches to this problem found in the literature assume fixed interactions among autonomous agents involved in the coalition formation process. Moreover, most of the prior research where agents are actually able to learn and adapt based on their past interactions mainly focuses on reinforcement learning techniques at the individual agent level. We argue that, in many important applications and contexts, complex large-scale collaborative multi-agent systems need to be able to learn and adapt at multiple organization, hierarchical and logical levels. In particular, the agents need to be able to learn both at the level of individual agents and at the system or agent ensemble levels, and then to integrate these different sources of learned knowledge and behavior, in order to be effective at solving complex tasks in typical dynamic, partially observable and noisy multi-agent environments. In this paper, we describe a conceptual framework for addressing the problem of learning how to coordinate effectively at three qualitatively distinct levels — those of (i) individual agents, (ii) small groups of agents, and (iii) very large agent ensembles (or alternatively, depending on the nature of a multi-agent system, at the system or central control level). We briefly illustrate the applicability and usefulness of the proposed conceptual framework with an example of how it would apply to an important practical coordination problem, namely that of distributed coordination of a large ensemble of unmanned vehicles on a complex multi-task mission.
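The three levels the abstract distinguishes can be sketched in code. The following minimal Python sketch is illustrative only, not the paper's method: level (i) is tabular Q-learning per agent, level (ii) is a simple knowledge-pooling rule standing in for small-group co-learning, and level (iii) is a hypothetical central rule that adapts exploration rates from aggregate performance. All class and function names here are assumptions introduced for illustration.

```python
import random
from collections import defaultdict

class Agent:
    """Level (i): individual reinforcement learning via tabular Q-learning."""
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.2, seed=None):
        self.q = defaultdict(float)          # Q-value table: (state, action) -> value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.rng = random.Random(seed)

    def act(self, state):
        # Epsilon-greedy action selection.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, s, a, r, s_next):
        # Standard Q-learning update.
        best_next = max(self.q[(s_next, a2)] for a2 in self.actions)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])

def co_learn(group):
    """Level (ii): small-group co-learning, modeled here as agents in a group
    pooling their Q-estimates by averaging -- a crude stand-in for the
    knowledge integration the framework calls for."""
    keys = set().union(*(a.q.keys() for a in group))
    for k in keys:
        avg = sum(a.q[k] for a in group) / len(group)
        for a in group:
            a.q[k] = avg

def meta_adapt(agents, recent_rewards, target=0.5):
    """Level (iii): ensemble-level meta-learning, modeled here as a central
    controller that shrinks exploration when the ensemble performs well and
    widens it otherwise (a hypothetical adaptation rule)."""
    mean_r = sum(recent_rewards) / len(recent_rewards)
    factor = 0.9 if mean_r >= target else 1.1
    for a in agents:
        a.epsilon = max(0.01, a.epsilon * factor)
```

As a usage sketch, a group of agents in a coordination game (reward 1 when they pick matching actions) would interleave individual `learn` updates each episode with periodic `co_learn` calls within groups and `meta_adapt` calls over the whole ensemble; the point is only to make the three qualitatively distinct learning loops and their integration concrete.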
