Neural representations can be induced without external stimulation, as in mental imagery. Our previous study found that imagined speaking and imagined hearing modulated perceptual neural responses in opposite directions, suggesting that motor-to-sensory transformation and memory retrieval are two separate routes for inducing auditory representations. We hypothesized that representations induced by different types of speech imagery differ in precision, leading to the distinct modulation effects. Specifically, we predicted that the one-to-one mapping between the motor and sensory domains established during speech production would evoke a more precise auditory representation during imagined speaking than retrieving the same sounds from memory would during imagined hearing. To test this hypothesis, we implemented representational precision as a modulation of connection strength in a neural network model. The model fitted the magnetoencephalography (MEG) imagery repetition effects, and the best-fitting parameters showed sharper tuning after imagined speaking than after imagined hearing, consistent with the representational-precision hypothesis. Moreover, the model predicted that different types of speech imagery would affect perception differently. In an imagery-adaptation experiment, the categorization of a /ba/-/da/ continuum by male and female human participants shifted more towards the preceding imagined syllable after imagined speaking than after imagined hearing. These converging simulation and behavioral results support our hypothesis that distinct mechanisms of speech imagery construct auditory representations with varying degrees of precision and differentially influence auditory perception. This study provides a mechanistic link between neural-level activity and psychophysics, revealing the neural computation underlying mental imagery.
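The core idea — that the precision of an imagery-induced representation determines how strongly it modulates responses to a subsequent sound — can be illustrated with a minimal population-tuning sketch. This is not the fitted model from the study; it is a toy construction under stated assumptions: units with Gaussian tuning along a /ba/-/da/ feature axis, imagery pre-activation with a fixed total amount of activation (so sharper tuning means a higher, narrower peak), and adaptation implemented as a gain reduction proportional to that pre-activation. All parameter values (`speak`, `hear`, `strength`) are illustrative, not the best-fitting ones reported in the paper.

```python
import numpy as np

def tuning(x, centers, sigma=0.5):
    """Gaussian tuning of syllable-selective units along a /ba/-/da/ axis."""
    return np.exp(-(x - centers) ** 2 / (2 * sigma ** 2))

def imagery_gain(centers, imagined, sigma_imagery, strength=5.0):
    """Imagery pre-activates units near the imagined syllable. Total imagery
    activation is held fixed, so a sharper profile concentrates more
    suppression on the units coding the imagined sound."""
    act = tuning(imagined, centers, sigma_imagery)
    act /= act.sum()                       # fixed total imagery activation
    return np.clip(1.0 - strength * act, 0.0, 1.0)

def suppression(probe, imagined, sigma_imagery, centers):
    """Fractional loss of the population response to `probe` after imagery."""
    base = tuning(probe, centers)
    adapted = imagery_gain(centers, imagined, sigma_imagery) * base
    return 1.0 - adapted.sum() / base.sum()

centers = np.linspace(-2.0, 2.0, 41)       # preferred values; /ba/=-1, /da/=+1
ba = -1.0

# Hypothetical tuning widths: sharper for imagined speaking, broader for
# imagined hearing (assumed values for illustration only).
speak, hear = 0.3, 0.8

# Sharper imagery tuning suppresses the repeated syllable more strongly ...
print(suppression(ba, ba, speak, centers) > suppression(ba, ba, hear, centers))    # True
# ... while spilling less suppression onto the other category.
print(suppression(1.0, ba, speak, centers) < suppression(1.0, ba, hear, centers))  # True
```

Under this construction, a more precise (narrower) imagery-induced representation produces a stronger, more selective repetition effect — the qualitative pattern the abstract attributes to imagined speaking relative to imagined hearing.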