2011 IEEE Conference on Computational Intelligence and Games

Evolving multimodal networks for multitask games

Abstract

Intelligent opponent behavior helps make video games interesting to human players. Evolutionary computation can discover such behavior, especially when the game consists of a single task. However, multitask domains, in which separate tasks within the domain each have their own dynamics and objectives, can be challenging for evolution. This paper proposes two methods for meeting this challenge by evolving neural networks: 1) Multitask Learning provides a network with distinct outputs per task, thus evolving a separate policy for each task, and 2) Mode Mutation provides a means to evolve new output modes, as well as a way to select which mode to use at each moment. Multitask Learning assumes agents know which task they are currently facing; if such information is available and accurate, this approach works very well, as demonstrated in the Front/Back Ramming game of this paper. In contrast, Mode Mutation discovers an appropriate task division on its own, which may in some cases be even more powerful than a human-specified task division, as shown in the Predator/Prey game of this paper. These results demonstrate the importance of both Multitask Learning and Mode Mutation for learning intelligent behavior in complex games.
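The abstract describes two ways of structuring a network's outputs: one output block per known task (Multitask Learning) and self-discovered output modes with their own arbitration (Mode Mutation). The sketch below is not the paper's implementation; it is a minimal illustration, assuming a plain feedforward network in NumPy and a hypothetical "preference" unit per mode that decides which block drives the agent when no task label is given.

```python
import numpy as np

class MultimodalNetwork:
    """Illustrative sketch (not the authors' implementation) of a network
    whose output layer is split into several modes, one block per policy."""

    def __init__(self, n_inputs, n_hidden, n_outputs_per_mode, n_modes, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(scale=0.5, size=(n_hidden, n_inputs))
        # Each mode owns its own output weights plus one extra "preference"
        # unit used for arbitration (a hypothetical detail of this sketch).
        self.modes = [rng.normal(scale=0.5, size=(n_outputs_per_mode + 1, n_hidden))
                      for _ in range(n_modes)]

    def forward(self, x, task_id=None):
        h = np.tanh(self.W_in @ x)
        outputs = [np.tanh(W_mode @ h) for W_mode in self.modes]
        if task_id is not None:
            # Multitask-Learning-style use: the known task label picks the block.
            chosen = task_id
        else:
            # Mode-arbitration variant: pick the mode whose preference value
            # (last unit of each block) is highest at this moment.
            chosen = int(np.argmax([o[-1] for o in outputs]))
        return outputs[chosen][:-1], chosen

    def add_mode(self, rng=None):
        # Sketch of a mode-mutation-style structural operator: copy an existing
        # output block and perturb it, giving evolution a new mode to refine.
        rng = rng or np.random.default_rng()
        template = self.modes[rng.integers(len(self.modes))]
        self.modes.append(template + rng.normal(scale=0.1, size=template.shape))
```

As a usage sketch, `MultimodalNetwork(8, 10, 3, 2)` builds an 8-input agent with two 3-output modes; calling `forward(obs, task_id=0)` mimics the Multitask Learning setting where the current task is known, while `forward(obs)` lets the preference units choose the mode, which is the kind of self-discovered task division the abstract attributes to Mode Mutation.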
