Choice as a Function of Reinforcer Hold: From Probability Learning to Concurrent Reinforcement

Abstract

Two procedures commonly used to study choice are concurrent reinforcement and probability learning. Under concurrent-reinforcement procedures, once a reinforcer is scheduled, it remains available indefinitely until collected. Therefore reinforcement becomes increasingly likely with passage of time or responses on other operanda. Under probability learning, reinforcer probabilities are constant and independent of passage of time or responses. Therefore a particular reinforcer is gained or not, on the basis of a single response, and potential reinforcers are not retained, as when betting at a roulette wheel. In the “real” world, continued availability of reinforcers often lies between these two extremes, with potential reinforcers being lost owing to competition, maturation, decay, and random scatter. The authors parametrically manipulated the likelihood of continued reinforcer availability, defined as hold, and examined the effects on pigeons’ choices. Choices varied as power functions of obtained reinforcers under all values of hold. Stochastic models provided generally good descriptions of choice emissions with deviations from stochasticity systematically related to hold. Thus, a single set of principles accounted for choices across hold values that represent a wide range of real-world conditions.
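The abstract defines "hold" as the likelihood that a scheduled but uncollected reinforcer remains available. Below is a minimal sketch of how such a procedure could be simulated in discrete trials; the function name simulate_hold, the scheduling probabilities, and the indifferent random responder are illustrative assumptions, not the authors' actual procedure or analysis.

```python
import random

def simulate_hold(p_schedule, hold, n_trials=10_000, bias=0.5, seed=0):
    """Simulate a two-alternative choice procedure in which a scheduled
    reinforcer survives an unchosen trial with probability `hold`.

    hold = 1.0 approximates concurrent reinforcement (a scheduled reinforcer
    waits indefinitely until collected); hold = 0.0 approximates probability
    learning (an uncollected reinforcer is lost immediately).
    """
    rng = random.Random(seed)
    scheduled = [False, False]   # is a reinforcer currently waiting on each alternative?
    obtained = [0, 0]            # reinforcers collected on each alternative
    for _ in range(n_trials):
        # independently schedule a reinforcer on each alternative
        for k in range(2):
            if not scheduled[k] and rng.random() < p_schedule[k]:
                scheduled[k] = True
        # an indifferent responder, used here only to illustrate the contingency
        choice = 0 if rng.random() < bias else 1
        if scheduled[choice]:
            obtained[choice] += 1
            scheduled[choice] = False
        # the reinforcer on the unchosen alternative survives with probability `hold`
        other = 1 - choice
        if scheduled[other] and rng.random() >= hold:
            scheduled[other] = False
    return obtained

if __name__ == "__main__":
    for h in (0.0, 0.5, 1.0):
        print(f"hold={h}:", simulate_hold(p_schedule=(0.2, 0.05), hold=h))
```

At hold = 1.0 the sketch reduces to the concurrent-reinforcement case and at hold = 0.0 to the probability-learning case described above. The power-function relation the abstract refers to is commonly written as the generalized matching law, C1/C2 = b(R1/R2)^a, where C and R are choice and obtained-reinforcer counts.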