
Reactive Versus Anticipative Decision Making in a Novel Gift-Giving Game



Abstract

Evolutionary game theory focuses on the fitness differences between simple discrete or probabilistic strategies to explain the evolution of particular decision-making behavior in strategic situations. Although this approach has provided substantial insight into the presence of fairness or generosity in gift-giving games, it does not fully resolve the question of which cognitive mechanisms are required to produce the choices observed in experiments. One such mechanism that humans have acquired is the capacity to anticipate. Prior work showed that forward-looking behavior, modeled with a recurrent neural network as the cognitive mechanism, is essential to reproduce the actions of human participants in behavioral experiments. In this paper, we evaluate whether this conclusion also extends to gift-giving games, more concretely to a game that combines the dictator game with a partner selection process. The recurrent neural network model used here for dictators allows them either to reason about a best response to past actions of the receivers (reactive model) or to decide which action will lead to a more successful outcome in the future (anticipatory model). For both models we show the decision dynamics during training as well as the average behavior. We find that the anticipatory model is the only one capable of accounting for changes in the context of the game, a behavior also observed in experiments, extending previous conclusions to this more sophisticated game.
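
To illustrate the distinction the abstract draws between reactive and anticipatory evaluation, the following is a minimal sketch, not the authors' implementation: the RNNDictator class, its dimensions, and the toy retention/payoff rules are assumptions made purely for illustration of how a recurrent model could score an offer either against past receiver behavior only (reactive) or against its expected effect on future partner retention (anticipatory).

```python
# Hypothetical sketch of reactive vs. anticipatory dictator evaluation.
# All names, dimensions, and payoff rules are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

class RNNDictator:
    """Tiny Elman-style RNN mapping a history of receiver actions to an offer in (0, 1)."""
    def __init__(self, hidden=8):
        self.Wx = rng.normal(scale=0.5, size=(hidden, 1))        # input -> hidden
        self.Wh = rng.normal(scale=0.5, size=(hidden, hidden))   # hidden -> hidden (recurrence)
        self.Wo = rng.normal(scale=0.5, size=(1, hidden))        # hidden -> offer
        self.h = np.zeros((hidden, 1))

    def step(self, receiver_action):
        """Fold the receiver's last action into the hidden state and emit an offer."""
        x = np.array([[receiver_action]])
        self.h = np.tanh(self.Wx @ x + self.Wh @ self.h)
        z = (self.Wo @ self.h).item()
        return 1.0 / (1.0 + np.exp(-z))   # squash to an offer fraction in (0, 1)

def reactive_value(offer, past_acceptance):
    """Reactive evaluation: best response to the receiver's observed past behavior only."""
    return (1.0 - offer) * past_acceptance

def anticipatory_value(offer, past_acceptance, horizon=5):
    """Anticipatory evaluation: also credit future rounds in which a generous offer
    keeps the receiver from selecting a different partner (toy retention assumption)."""
    p_keep = min(1.0, past_acceptance + offer)   # assumed partner-retention probability
    return sum((1.0 - offer) * p_keep ** t for t in range(horizon))

dictator = RNNDictator()
offer = dictator.step(receiver_action=1.0)
print("offer:", round(offer, 3),
      "| reactive value:", round(reactive_value(offer, 0.8), 3),
      "| anticipatory value:", round(anticipatory_value(offer, 0.8), 3))
```

Under these assumptions, the reactive score depends only on what the receiver did before, while the anticipatory score rewards offers that are expected to preserve the partnership over future rounds, which is the qualitative difference the paper examines.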
