Journal of Artificial General Intelligence

What’s Next if Reward is Enough? Insights for AGI from Animal Reinforcement Learning

Abstract

There has been considerable recent interest in the "Reward is Enough" hypothesis: the idea that agents can develop general intelligence even with simple reward functions, provided the environment they operate in is sufficiently complex. While this is an interesting framework for approaching the AGI problem, it also raises new questions: What kind of RL algorithm should the agent use? What should the reward function look like? How can the agent quickly generalize its learning to new tasks? This paper looks to animal reinforcement learning, both individual and social, to address these questions and more. It evaluates existing computational models and neural substrates of Pavlovian conditioning, reward-based action selection, intrinsic motivation, attention-based task representations, social learning, and meta-learning in animals, and discusses how insights from these findings can influence the development of animal-level AGI within an RL framework.
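As a concrete illustration of the kind of computational model of Pavlovian conditioning the abstract refers to, below is a minimal sketch of the classic Rescorla-Wagner update rule, in which associative strength is adjusted in proportion to the prediction error on each trial. The function name, learning rate, and trial schedule here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def rescorla_wagner(stimuli, rewards, n_cues, alpha=0.1):
    """Track associative strengths V for each cue across trials.

    stimuli: (n_trials, n_cues) binary array; 1 if a cue is present.
    rewards: (n_trials,) array giving the US magnitude (lambda) per trial.
    alpha:   learning rate (illustrative value, not from the paper).
    """
    V = np.zeros(n_cues)
    history = []
    for x, r in zip(stimuli, rewards):
        prediction = V @ x       # summed prediction from all present cues
        delta = r - prediction   # prediction error drives learning
        V += alpha * delta * x   # only present cues are updated
        history.append(V.copy())
    return np.array(history)

# Hypothetical example: simple acquisition, one cue always followed by reward.
trials = 50
stimuli = np.ones((trials, 1))
rewards = np.ones(trials)
V_hist = rescorla_wagner(stimuli, rewards, n_cues=1)
print(V_hist[-1])  # associative strength approaches the asymptote of 1.0
```

The same error-driven update reappears, in temporal-difference form, in the reward-based action-selection models the paper surveys, which is one reason Pavlovian conditioning is a natural starting point for an RL account of animal learning.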