Towards Multiagent Meta-Level Control

Abstract

Embedded systems consisting of collaborating agents capable of interacting with their environment are becoming ubiquitous. It is crucial for these systems to be able to adapt to the dynamic and uncertain characteristics of an open environment. In this paper, we argue that multiagent meta-level control (MMLC) is an effective way to determine when this adaptation process should be done and how much effort should be invested in adaptation, as opposed to continuing with the current action plan. We describe a reinforcement-learning-based approach to learn decentralized meta-control policies offline. We then propose to use the learned reward model as input to a global optimization algorithm to avoid conflicting meta-level decisions between coordinating agents. Our initial experiments in the context of NetRads, a multiagent tornado-tracking application, show that MMLC significantly improves performance in a 3-agent network.
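To make the offline learning idea concrete, the following is a minimal, hypothetical sketch of how a single agent's meta-level control decision (continue with the current plan vs. invest effort in adaptation) could be learned with tabular Q-learning. The state names, transition model, and reward values below are illustrative assumptions for exposition, not the reward model or state abstraction used in the paper.

```python
import random

# Hypothetical sketch: offline tabular Q-learning for one agent's
# meta-level choice between "continue" (stick with the current plan)
# and "adapt" (spend deliberation effort on replanning).
STATES = ["stable", "drifting", "volatile"]   # abstracted environment conditions (assumed)
ACTIONS = ["continue", "adapt"]

# Illustrative reward model (assumed values): adapting pays off when the
# environment has changed, but wastes deliberation time when it is stable.
REWARD = {
    ("stable", "continue"): 1.0,
    ("stable", "adapt"): -0.5,
    ("drifting", "continue"): -0.2,
    ("drifting", "adapt"): 0.6,
    ("volatile", "continue"): -1.0,
    ("volatile", "adapt"): 1.2,
}

def train(episodes=5000, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Learn a meta-control policy offline with epsilon-greedy Q-learning."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    state = rng.choice(STATES)
    for _ in range(episodes):
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        reward = REWARD[(state, action)]
        next_state = rng.choice(STATES)  # simplistic random environment transition
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        # standard Q-learning update
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
    # extract the greedy policy from the learned Q-values
    policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
    return policy, q

if __name__ == "__main__":
    policy, _ = train()
    print(policy)
```

In the multiagent setting described in the abstract, the learned Q-values (or reward model) would not be used greedily per agent as above; instead they would feed a global optimization step so that coordinating agents do not make conflicting meta-level decisions.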
