NeuroImage

A voxel-wise encoding model for early visual areas decodes mental images of remembered scenes

Abstract

Recent multi-voxel pattern classification (MVPC) studies have shown that in early visual cortex patterns of brain activity generated during mental imagery are similar to patterns of activity generated during perception. This finding implies that low-level visual features (e.g., space, spatial frequency, and orientation) are encoded during mental imagery. However, the specific hypothesis that low-level visual features are encoded during mental imagery is difficult to directly test using MVPC. The difficulty is especially acute when considering the representation of complex, multi-object scenes that can evoke multiple sources of variation that are distinct from low-level visual features. Therefore, we used a voxel-wise modeling and decoding approach to directly test the hypothesis that low-level visual features are encoded in activity generated during mental imagery of complex scenes. Using fMRI measurements of cortical activity evoked by viewing photographs, we constructed voxel-wise encoding models of tuning to low-level visual features. We also measured activity as subjects imagined previously memorized works of art. We then used the encoding models to determine if putative low-level visual features encoded in this activity could pick out the imagined artwork from among thousands of other randomly selected images. We show that mental images can be accurately identified in this way; moreover, mental image identification accuracy depends upon the degree of tuning to low-level visual features in the voxels selected for decoding. These results directly confirm the hypothesis that low-level visual features are encoded during mental imagery of complex scenes. Our work also points to novel forms of brain-machine interaction: we provide a proof-of-concept demonstration of an internet image search guided by mental imagery. (C) 2014 The Authors. Published by Elsevier Inc.
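The identification procedure described in the abstract can be summarized in a short sketch. The Python code below is a minimal illustration, not the authors' actual pipeline: it assumes precomputed low-level image features (for example Gabor-wavelet-like energies), fits one ridge-regularized linear encoding model per voxel from the photograph-viewing data, and then ranks candidate images by the correlation between their predicted voxel responses and the activity measured during mental imagery. All function names, array shapes, and the choice of closed-form ridge regression are illustrative assumptions.

import numpy as np

def fit_encoding_models(train_features, train_voxels, alpha=1.0):
    # train_features: (n_images, n_features) low-level features of the viewed photographs
    # train_voxels:   (n_images, n_voxels) fMRI responses evoked by viewing them
    # Returns W: (n_features, n_voxels), one linear model per voxel (closed-form ridge).
    X, Y = train_features, train_voxels
    n_features = X.shape[1]
    W = np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ Y)
    return W

def identify_mental_image(W, imagery_activity, candidate_features):
    # imagery_activity:   (n_voxels,) activity measured while the subject imagines the artwork
    # candidate_features: (n_candidates, n_features) features of the candidate image set
    # Rank candidates by Pearson correlation between predicted and measured activity.
    predicted = candidate_features @ W                      # (n_candidates, n_voxels)
    def zscore(a):
        return (a - a.mean(axis=-1, keepdims=True)) / a.std(axis=-1, keepdims=True)
    scores = zscore(predicted) @ zscore(imagery_activity) / imagery_activity.size
    return np.argsort(scores)[::-1]                         # best-matching candidate first

In this framing, identification succeeds when the imagined artwork ranks at or near the top among thousands of randomly selected distractor images, and restricting the decoder to voxels that are well tuned to the low-level features should raise that accuracy, which is the dependence the abstract reports.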
