IEEE Transactions on Games

Hierarchical Reinforcement Learning With Monte Carlo Tree Search in Computer Fighting Game

Abstract

Fighting games are complex environments in which challenging action-selection problems arise, mainly due to the diversity of opponents and possible actions. In this paper, we present the design and evaluation of a fighting player built on top of the FightingICE platform used in the Fighting Game Artificial Intelligence (FTGAI) competition. Our proposal is based on hierarchical reinforcement learning (HRL) combined with Monte Carlo tree search (MCTS) designed as options. Using the FightingICE framework, we evaluate our player against state-of-the-art FTGAIs. We train our player against the current FTGAI champion (GigaThunder). The resulting learned policy is comparable to the champion in direct confrontation in terms of the number of victories, with the advantage of requiring less expert knowledge. We also evaluate the proposed player against the runners-up and show that adapting to the strategies of each opponent is necessary for building stronger fighting players.
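To make the architecture concrete, the sketch below illustrates, under loose assumptions, how MCTS can be wrapped as an option inside an HRL agent: a top-level SMDP Q-learner chooses among options, and each option's intra-option policy runs a short Monte Carlo search over primitive game actions. All names here (StubEnv, MCTSOption, HRLAgent, simulate) are hypothetical placeholders, not the paper's implementation or the FightingICE API, and flat Monte Carlo rollouts stand in for full UCT to keep the example short.

```python
import random
from collections import defaultdict


class StubEnv:
    """Hypothetical stand-in for a fighting-game simulator (not the FightingICE API)."""
    def simulate(self, state, action, depth):
        # Return a random rollout value; a real simulator would play `depth`
        # frames forward from `state` and return the accumulated reward.
        return random.random()


class MCTSOption:
    """Option whose intra-option policy is a shallow Monte Carlo search."""
    def __init__(self, actions, simulations=50, horizon=5):
        self.actions = actions
        self.simulations = simulations
        self.horizon = horizon

    def act(self, env, state):
        # Score each root action by random rollouts (flat Monte Carlo stands
        # in for full UCT here to keep the sketch short).
        returns = defaultdict(float)
        for _ in range(self.simulations):
            action = random.choice(self.actions)
            returns[action] += env.simulate(state, action, depth=self.horizon)
        return max(returns, key=returns.get)


class HRLAgent:
    """Top-level SMDP Q-learner that selects among options."""
    def __init__(self, options, lr=0.1, gamma=0.95, eps=0.1):
        self.options = options
        self.q = defaultdict(lambda: [0.0] * len(options))
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def select_option(self, state_key):
        # Epsilon-greedy choice over option values for the current state.
        if random.random() < self.eps:
            return random.randrange(len(self.options))
        return max(range(len(self.options)), key=lambda i: self.q[state_key][i])

    def update(self, s, o, reward, s_next, steps):
        # SMDP Q-learning update: discount by the number of primitive steps
        # the option ran for before terminating.
        target = reward + (self.gamma ** steps) * max(self.q[s_next])
        self.q[s][o] += self.lr * (target - self.q[s][o])


if __name__ == "__main__":
    env = StubEnv()
    agent = HRLAgent(options=[MCTSOption(["punch", "kick", "guard"]),
                              MCTSOption(["jump", "crouch", "throw"])])
    state = "round_start"
    o = agent.select_option(state)
    low_level_action = agent.options[o].act(env, state)
    agent.update(state, o, reward=1.0, s_next="mid_round", steps=3)
    print(o, low_level_action)
```

In this reading, the HRL layer learns which option to invoke against a given opponent, while each MCTS option handles frame-level action selection, which is consistent with the abstract's claim that less expert knowledge is needed at the top level.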
