Journal of Cognitive Neuroscience

Hippocampal Contribution to Probabilistic Feedback Learning: Modeling Observation- and Reinforcement-based Processes

Abstract

Simple probabilistic reinforcement learning is recognized as a striatum-based learning system, but in recent years, has also been associated with hippocampal involvement. This study examined whether such involvement may be attributed to observation-based learning (OL) processes, running in parallel to striatum-based reinforcement learning. A computational model of OL, mirroring classic models of reinforcement-based learning (RL), was constructed and applied to the neuroimaging data set of Palombo, Hayes, Reid, and Verfaellie [2019. Hippocampal contributions to value-based learning: Converging evidence from fMRI and amnesia. Cognitive, Affective, & Behavioral Neuroscience, 19(3), 523–536]. Results suggested that OL processes may indeed take place concomitantly to reinforcement learning and involve activation of the hippocampus and central orbitofrontal cortex. However, rather than independent mechanisms running in parallel, the brain correlates of the OL and RL prediction errors indicated collaboration between systems, with direct implication of the hippocampus in computations of the discrepancy between the expected and actual reinforcing values of actions. These findings are consistent with previous accounts of a role for the hippocampus in encoding the strength of observed stimulus–outcome associations, with updating of such associations through striatal reinforcement-based computations. In addition, enhanced negative RL prediction error signaling was found in the anterior insula with greater use of OL over RL processes. This result may suggest an additional mode of collaboration between the OL and RL systems, implicating the error monitoring network.
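The abstract describes a computational model of OL built to mirror classic RL models, with prediction errors at the core of both. As a rough illustration only, and not the authors' implementation, the Python sketch below shows the generic delta-rule form such models take: an RL prediction error comparing received reward against the expected value of an action, and an OL analog comparing an observed outcome against a stored stimulus–outcome association. All function names, the learning rate alpha, and the toy task are assumptions introduced here for illustration.

```python
import numpy as np

def rl_update(value, reward, alpha=0.1):
    """Classic reinforcement-learning (delta-rule) update.

    The prediction error is the discrepancy between the received
    reward and the current expected value of the chosen action.
    """
    prediction_error = reward - value          # RL prediction error
    value += alpha * prediction_error          # incremental value update
    return value, prediction_error

def ol_update(association, observed_outcome, alpha=0.1):
    """Hypothetical observation-based learning (OL) update, mirroring
    the RL rule: the stimulus-outcome association strength is nudged
    toward the outcome that was actually observed.
    """
    prediction_error = observed_outcome - association  # OL analog
    association += alpha * prediction_error
    return association, prediction_error

# Toy probabilistic feedback task: one stimulus, rewarded with p = 0.8.
rng = np.random.default_rng(0)
value, association = 0.0, 0.0
for trial in range(200):
    outcome = float(rng.random() < 0.8)
    value, rl_pe = rl_update(value, outcome)
    association, ol_pe = ol_update(association, outcome)
print(f"learned value ~ {value:.2f}, learned association ~ {association:.2f}")
```

In both rules the learned quantity converges toward the outcome probability (here 0.8); the distinction the study draws lies in what drives the update, action-contingent reinforcement versus passive observation, and where the corresponding prediction errors are expressed in the brain.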