Journal: Adaptive Behavior

Synthetic learning agents in game-playing social environments


Abstract

This paper investigates the performance of synthetic agents in playing and learning scenarios in a turn-based zero-sum game, and highlights the ability of opponent-based learning models to achieve competitive playing performance in social environments. Synthetic agents are generated from combinations of key parameters, such as the exploitation-vs-exploration trade-off, the learning back-up and discount rates, and the speed of learning, and interact over a very large number of games on a grid infrastructure. The experimental data are then analysed to form clusters of agents that reveal interesting associations between eventual performance ranking and learning-parameter set-up. The evolution of these clusters indicates that agents predisposed to knowledge exploration and slower learning tend to outperform exploiters, which tend to prefer fast learning. Observing these clusters against the agents' playing behaviours also makes it possible to investigate how best to select opponents from a group: initial results suggest that good progress and stable evolution arise when an agent faces opponents of increasing capacity, and that an agent with a well-set-up learning mechanism progresses better when facing less favourably set-up agents.
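The abstract does not name the learning algorithm, but the parameters it lists (exploitation-vs-exploration trade-off, back-up and discount rates, speed of learning) map naturally onto a tabular temporal-difference learner with epsilon-greedy action selection. The sketch below is an illustrative assumption, not the paper's implementation: `epsilon` governs exploration, `alpha` the speed of learning, and `gamma` the discounting of backed-up values.

```python
import random
from collections import defaultdict

class SyntheticAgent:
    """Hypothetical tabular Q-learner parameterised like the agents in the study:
    epsilon - exploitation-vs-exploration trade-off (epsilon-greedy)
    alpha   - speed of learning (step size)
    gamma   - discount rate applied to backed-up values
    """
    def __init__(self, epsilon=0.1, alpha=0.5, gamma=0.9, seed=None):
        self.epsilon = epsilon
        self.alpha = alpha
        self.gamma = gamma
        self.q = defaultdict(float)      # (state, action) -> estimated value
        self.rng = random.Random(seed)

    def choose(self, state, actions):
        """Epsilon-greedy selection over the legal actions in `state`."""
        if self.rng.random() < self.epsilon:
            return self.rng.choice(actions)
        return max(actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state, next_actions):
        """One-step Q-learning back-up towards reward + gamma * max Q(next)."""
        best_next = max((self.q[(next_state, a)] for a in next_actions),
                        default=0.0)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
```

Under this reading, the paper's "explorers vs. exploiters" contrast corresponds to varying `epsilon` and `alpha` across the agent population before letting the agents play each other at scale.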

