Journal of Vision

A song of scenes & sentences: signatures of shared cortical resources between visual perception and language revealed by representational similarity analysis

Abstract

Previous imaging studies investigating the domain specificity of cortical networks have indicated some common principles of processing across different cognitive functions, and therefore shared cortical resources, e.g. the processing of hierarchical structures ("syntax") or contextual meaning ("semantics"). Whereas the majority of this research has focused on language and music, recent studies have also emphasized comparable principles in visual perception. However, little is known about the degree to which cortical resources may be shared between vision and language. To address this gap, we created a paradigm consisting of two modalities, visual (natural scenes) and auditory (sentences) stimuli, equally divided into consistent, semantically inconsistent, and syntactically inconsistent conditions. Twenty participants either viewed images or listened to sentences while BOLD responses were recorded. We assessed cortical activation patterns for semantic and syntactic language processing by applying the general linear model in each participant's native space, thus creating participant-specific semantic and syntactic functional ROIs (pfROIs). Subsequently, we conducted a representational similarity analysis (RSA) within those pfROIs, including activation patterns from all conditions and modalities, to investigate the relationship between the activation patterns of language and visual perception more closely. Both language conditions activated the expected left-lateralized networks, comprising IFG, STS/MTG, and IPL (semantics), and IFG as well as STS/STG (syntax). RSA in all pfROIs revealed distinguishable patterns between modalities. Examining the patterns more closely, we found the highest similarities across modalities for both semantic and syntactic processing in their respective pfROIs. In particular, the semantic pfROIs showed the highest similarity between the activation patterns for semantic processing of language and vision, whereas syntactic processing revealed the most similar activation patterns in the syntactic pfROIs. These results underline specific and distinct processing for semantics and syntax, and additionally give a first insight into common principles shared between vision and language, as the resulting activation patterns for either semantic or syntactic processing were most similar across modalities.
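The central analysis step is the representational similarity comparison between condition- and modality-specific activation patterns within each pfROI. Below is a minimal illustrative sketch of such a comparison, assuming the GLM beta patterns have already been extracted as voxel vectors per condition; all variable names, condition labels, and array shapes are hypothetical and not taken from the study.

import numpy as np
from scipy.spatial.distance import pdist, squareform

# Sketch of a representational similarity analysis (RSA) within one
# participant-specific functional ROI (pfROI). Random data stand in for
# the extracted beta patterns; shapes and labels are placeholders.

rng = np.random.default_rng(0)
n_voxels = 500  # number of voxels in the pfROI (placeholder)

# Six conditions: 3 consistency levels x 2 modalities (vision / language)
conditions = [
    "vis_consistent", "vis_sem_inconsistent", "vis_syn_inconsistent",
    "aud_consistent", "aud_sem_inconsistent", "aud_syn_inconsistent",
]
patterns = np.vstack([rng.standard_normal(n_voxels) for _ in conditions])

# Representational dissimilarity matrix (RDM): 1 - Pearson correlation
# between the activation patterns of every pair of conditions.
rdm = squareform(pdist(patterns, metric="correlation"))

# Cross-modal dissimilarity for matched conditions, e.g. semantic
# inconsistency in vision vs. language (lower distance = more similar).
i = conditions.index("vis_sem_inconsistent")
j = conditions.index("aud_sem_inconsistent")
print(f"Cross-modal distance (semantic): {rdm[i, j]:.3f}")

In the study itself, such RDMs would be computed per pfROI and per participant on the real beta patterns, and cross-modal similarity then compared between semantic and syntactic pfROIs.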