
Increasing the Upper Bound for the EvoMan Game Competition


Abstract

This paper compares algorithms for evolving agents that play the game Evoman. Our team took part in the “Evoman: Game-playing Competition for WCCI 2020” and won second place. Beyond finding a good agent that satisfies the competition requirements, which emphasise the ability to generalise, we surpassed the best-known non-general upper bound. We exceeded this upper bound with a Proximal Policy Optimization (PPO) algorithm by discarding the competition's generalisation requirement. We also present our other exploratory attempts: Q-learning, Genetic Algorithms, Particle Swarm Optimisation, and their PPO hybridisations. Finally, we map the behaviour of our algorithm across the space of game difficulty, generating plausible extensions to the existing upper bound.
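The core of PPO, the method the abstract credits with exceeding the upper bound, is its clipped surrogate objective, which limits how far a single policy update can move from the behaviour that collected the data. The following is a minimal illustrative sketch of that objective only, not the authors' implementation; the function name and the example numbers are assumptions for illustration.

```python
def ppo_clip_objective(ratios, advantages, epsilon=0.2):
    """Mean clipped surrogate objective over a batch (illustrative sketch).

    ratios     -- pi_new(a|s) / pi_old(a|s) for each sampled action
    advantages -- advantage estimates for the same actions
    epsilon    -- clip range (0.2 is the default from the PPO paper)
    """
    total = 0.0
    for r, a in zip(ratios, advantages):
        # Clip the probability ratio to [1 - epsilon, 1 + epsilon] ...
        clipped = max(1.0 - epsilon, min(r, 1.0 + epsilon))
        # ... and take the pessimistic (smaller) of the two surrogate terms,
        # so large ratio changes cannot inflate the objective.
        total += min(r * a, clipped * a)
    return total / len(ratios)


# With a ratio far above 1 + epsilon, the clipped term caps the gain:
print(ppo_clip_objective([2.0], [1.0]))
```

The clipping is what lets PPO take many gradient steps on the same batch of Evoman rollouts without the policy collapsing, which is one reason it is a common first choice for game-playing agents.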
