Intrinsically motivated reinforcement learning: A promising framework for procedural content generation

IEEE Conference on Computational Intelligence and Games

Abstract

So far, Evolutionary Algorithms (EA) have been the dominant paradigm for Procedural Content Generation (PCG). While we believe the field has achieved remarkable success, we argue that there is still considerable room for improvement. The field of machine learning offers an abundance of methods that promise solutions to aspects of PCG that remain under-researched. In this paper, we advocate the use of intrinsically motivated reinforcement learning for content generation: a class of methods that strive for knowledge for its own sake rather than as a step towards finding a solution. We argue that this approach promises solutions to some well-known problems in PCG: (1) the search for novelty and diversity can be easily incorporated as an intrinsic reward, (2) models of player experience can be improved and adapted content generated simultaneously by combining extrinsic and intrinsic rewards, and (3) mixed-initiative design tools can incorporate more knowledge about the designer and her preferences and ultimately provide better assistance. We demonstrate our arguments and discuss the challenges that face the proposed approach.
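As a concrete illustration of points (1) and (2), the sketch below combines an extrinsic playability score with a count-based novelty bonus serving as the intrinsic reward. It is a minimal sketch under assumed specifics: a toy one-dimensional level generator and a crude tabular update; the environment and all names (TILES, BETA, extrinsic_reward, and so on) are illustrative, not from the paper.

```python
import random
from collections import defaultdict

# Illustrative toy setup (not from the paper): a generator agent fills a
# 1-D level with tiles; the return combines an extrinsic playability
# score with an intrinsic count-based novelty bonus.
TILES = ["ground", "gap", "enemy", "coin"]
LEVEL_LEN = 8
BETA = 0.5            # weight of the intrinsic term vs. the extrinsic one
ALPHA, EPS = 0.1, 0.2  # learning rate and exploration rate

visit_counts = defaultdict(int)   # novelty statistics over (position, tile)
Q = defaultdict(float)            # tabular values over (position, tile)

def extrinsic_reward(level):
    """Toy playability score: penalize two gaps in a row."""
    bad = sum(1 for a, b in zip(level, level[1:]) if a == b == "gap")
    return 1.0 - bad / (LEVEL_LEN - 1)

def intrinsic_reward(pos, tile):
    """Count-based novelty bonus: rarely chosen options pay more."""
    visit_counts[(pos, tile)] += 1
    return visit_counts[(pos, tile)] ** -0.5

def generate_episode():
    """Build one level with an epsilon-greedy policy over Q."""
    level, total_intrinsic = [], 0.0
    for pos in range(LEVEL_LEN):
        if random.random() < EPS:
            tile = random.choice(TILES)
        else:
            tile = max(TILES, key=lambda t: Q[(pos, t)])
        total_intrinsic += intrinsic_reward(pos, tile)
        level.append(tile)
    return level, total_intrinsic

for episode in range(2000):
    level, r_int = generate_episode()
    # Combined return: extrinsic playability plus weighted novelty.
    r = extrinsic_reward(level) + BETA * r_int / LEVEL_LEN
    # Crude Monte Carlo update: credit every decision with the return.
    for pos, tile in enumerate(level):
        Q[(pos, tile)] += ALPHA * (r - Q[(pos, tile)])

print(generate_episode()[0])
```

In the paper's framing, the intrinsic term would come from a drive such as curiosity or learning progress rather than raw visit counts, and the extrinsic term from a model of player experience; the sketch only shows how the two signals combine into a single return.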
