
Reading visually embodied meaning from the brain: Visually grounded computational models decode visual-object mental imagery induced by written text


Abstract

© 2015 Elsevier Inc. Embodiment theory predicts that mental imagery of object words recruits neural circuits involved in object perception. The degree of visual imagery present in routine thought and how it is encoded in the brain is largely unknown. We test whether fMRI activity patterns elicited by participants reading objects' names include embodied visual-object representations, and whether we can decode those representations using novel computational image-based semantic models. We first apply the image models in conjunction with text-based semantic models to test predictions of the visual specificity of semantic representations in different brain regions. Representational similarity analysis confirms that fMRI structure within ventral-temporal and lateral-occipital regions correlates most strongly with the image models; conversely, text models correlate better with posterior-parietal/lateral-temporal/inferior-frontal regions. We use an unsupervised decoding algorithm that exploits commonalities in the representational similarity structure found within both the image-model and brain data sets to classify embodied visual representations with high accuracy (8/10), and then extend it to exploit model combinations to robustly decode different brain regions in parallel. By capturing latent visual-semantic structure, our models provide a route into analyzing neural representations derived from past perceptual experience rather than stimulus-driven brain activity. Our results also verify the benefit of combining multimodal data to model human-like semantic representations.
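The abstract's two core analyses lend themselves to a short illustration. The sketch below is not the authors' code: it assumes NumPy/SciPy, uses correlation-distance RDMs with Spearman rank correlation as a standard RSA formulation, and substitutes a brute-force permutation search for the paper's unsupervised matching algorithm, which is practical only for small item sets such as the ten object concepts decoded here. All names and array shapes are hypothetical.

```python
# Illustrative sketch (not the authors' implementation) of representational
# similarity analysis (RSA) and unsupervised decoding by matching similarity
# structures. Rows of each feature matrix are items (object concepts).
from itertools import permutations

import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr


def rdm(features):
    """Representational dissimilarity matrix: correlation distance
    (1 - Pearson r) between every pair of item vectors (rows)."""
    return squareform(pdist(features, metric="correlation"))


def rsa(brain_patterns, model_features):
    """Second-order similarity: Spearman correlation between the upper
    triangles of the brain RDM and the model RDM."""
    b, m = rdm(brain_patterns), rdm(model_features)
    iu = np.triu_indices_from(b, k=1)
    return spearmanr(b[iu], m[iu]).correlation


def unsupervised_decode(brain_patterns, model_features):
    """Label brain patterns without supervision: choose the assignment of
    model items to brain items whose permuted model RDM best matches the
    brain RDM. Brute force over permutations, so feasible only for small
    item sets."""
    n = len(brain_patterns)
    b, m = rdm(brain_patterns), rdm(model_features)
    iu = np.triu_indices(n, k=1)
    best_perm, best_r = None, -np.inf
    for perm in permutations(range(n)):
        p = list(perm)
        r = spearmanr(b[iu], m[np.ix_(p, p)][iu]).correlation
        if r > best_r:
            best_r, best_perm = r, perm
    return best_perm  # best_perm[i] = model item assigned to brain item i
```

The abstract's final step, exploiting model combinations, could be sketched on top of this by averaging the image-model and text-model RDMs before matching; the rank-based correlation makes such a combination insensitive to the two models' different distance scales.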