Cognitive Systems Research

Modeling implicit learning in a cross-modal audio-visual serial reaction time task

Abstract

This study examined implicit learning in a cross-modal condition in which visual and auditory stimuli were presented in an alternating fashion. Each cross-modal transition occurred with a probability of 0.85, enabling participants to gain a reaction-time benefit by learning the cross-modal predictive information between colors and tones. Motor responses were randomly remapped to ensure that pure perceptual learning took place. The effect of implicit learning was extracted by fitting five different models to the data, which were highly variable due to motor variability. To examine individual learning rates for stimulus types of different discriminability and modality, the models were fitted per stimulus type and individually for each participant. Model selection identified the model that included motor variability, surprise effects for deviants, and a serial position for effect onset as the most explanatory (Akaike weight 0.87). Further, there was a significant global cross-modal implicit learning effect for predictable versus deviant transitions (40 ms reaction-time difference, p < .004). Learning rates over time differed both between modalities and between stimuli within a modality, although there was no correlation with global error rates or reaction-time differences between stimulus types. These results demonstrate a modeling method that is well suited to extracting detailed information about the success of implicit learning from highly variable data. They further show a cross-modal implicit learning effect, which extends the understanding of the implicit learning system and highlights the possibility that information can be processed in a cross-modal representation without conscious processing. (C) 2018 Elsevier B.V. All rights reserved.
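The abstract reports that the winning model reached an Akaike weight of 0.87. As a minimal sketch of how such a weight is derived from per-model AIC scores (the five AIC values below are hypothetical, not taken from the study):

```python
import math

def akaike_weights(aics):
    """Compute Akaike weights from a list of AIC values.

    Each weight estimates the probability that the corresponding
    model is the best of the candidate set, given the data.
    """
    best = min(aics)
    # Relative likelihood of each model: exp(-delta_i / 2),
    # where delta_i is the AIC difference to the best model.
    rel = [math.exp(-(a - best) / 2.0) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical AIC values for five candidate models
aics = [1204.3, 1200.1, 1210.7, 1206.9, 1215.2]
weights = akaike_weights(aics)
```

The weights sum to one across the candidate set, so a single model carrying a weight of 0.87 dominates the remaining four, which is the sense in which the abstract calls it "the most explanatory."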

