International Joint Conference on Neural Networks

Integrating self-organizing neural network and Motivated Learning for coordinated multi-agent reinforcement learning in multi-stage stochastic game



Abstract

Most non-trivial problems require the coordinated performance of multiple goal-oriented and time-critical tasks. Coordination is required due to the dependencies among the tasks and the sharing of resources. In this work, an agent learns to perform a task using reinforcement learning with a self-organizing neural network as the function approximator. We propose a novel coordination strategy integrating Motivated Learning (ML) and a self-organizing neural network for multi-agent reinforcement learning (MARL). Specifically, we adapt the ML idea of using a pain signal to overcome the resource competition issue. Dependency among the agents is resolved using domain knowledge of their dependence. To avoid domineering agents, the task goals are staggered over multiple stages; a stage is completed by attaining a particular combination of task goals. Results from our experiments, conducted using the popular PC-based game Starcraft Broodwar, show that the goals of multiple tasks can be attained efficiently using our proposed coordination strategy.
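To make the pain-signal idea concrete, the following is a minimal illustrative sketch, not the authors' implementation: each agent accumulates "pain" while its resource need goes unmet, pain decays once the need is satisfied, and a contested resource is granted to the agent in most pain, so no single agent can monopolize it. All class and function names here are hypothetical.

```python
class Agent:
    """Hypothetical agent carrying a Motivated-Learning-style pain signal."""

    def __init__(self, name, decay=0.5):
        self.name = name
        self.pain = 0.0      # accumulated "pain" for an unmet resource need
        self.decay = decay   # how quickly pain fades once the need is met

    def update_pain(self, resource_deficit):
        # Pain grows with the size of the unmet need and decays
        # multiplicatively once the deficit is gone.
        if resource_deficit > 0:
            self.pain += resource_deficit
        else:
            self.pain *= self.decay


def arbitrate(agents):
    # Grant the contested resource to the agent with the highest pain,
    # which prevents a dominant agent from starving the others.
    return max(agents, key=lambda a: a.pain)


agents = [Agent("builder"), Agent("fighter")]
agents[0].update_pain(3.0)   # builder is short 3 units of a resource
agents[1].update_pain(1.0)   # fighter is short 1 unit
winner = arbitrate(agents)   # builder wins the arbitration
winner.update_pain(-1.0)     # winner's need is satisfied; its pain decays
```

Under this toy dynamic, an agent that keeps losing arbitration accumulates pain until it eventually wins, which mirrors the anti-domination role the staged goals play in the proposed strategy.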
