PLoS Computational Biology

Dynamics of Trimming the Content of Face Representations for Categorization in the Brain


Abstract

To understand visual cognition, it is imperative to determine when, how and with what information the human brain categorizes the visual input. Visual categorization consistently involves at least an early and a late stage: the occipito-temporal N170 event related potential related to stimulus encoding and the parietal P300 involved in perceptual decisions. Here we sought to understand how the brain globally transforms its representations of face categories from their early encoding to the later decision stage over the 400 ms time window encompassing the N170 and P300 brain events. We applied classification image techniques to the behavioral and electroencephalographic data of three observers who categorized seven facial expressions of emotion and report two main findings: (1) over the 400 ms time course, processing of facial features initially spreads bilaterally across the left and right occipito-temporal regions to dynamically converge onto the centro-parietal region; (2) concurrently, information processing gradually shifts from encoding common face features across all spatial scales (e.g., the eyes) to representing only the finer scales of the diagnostic features that are richer in useful information for behavior (e.g., the wide opened eyes in ‘fear’; the detailed mouth in ‘happy’). Our findings suggest that the brain refines its diagnostic representations of visual categories over the first 400 ms of processing by trimming a thorough encoding of features over the N170, to leave only the detailed information important for perceptual decisions over the P300.
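The abstract does not spell out the classification-image analysis itself. As a rough illustration of the general reverse-correlation idea assumed here (not the authors' actual pipeline), the sketch below correlates per-trial stimulus-sampling masks (e.g., Bubbles-style apertures) with a per-trial outcome such as behavioral accuracy or a single-trial EEG amplitude, and z-scores the result against a permutation null. The function name, the permutation-based significance step, and the array shapes are illustrative assumptions.

```python
import numpy as np

def classification_image(masks, outcomes, n_perm=200, seed=0):
    """Reverse-correlation sketch (illustrative, not the paper's pipeline).

    masks    : (n_trials, H, W) array of per-trial sampling apertures
               indicating which image pixels were visible on each trial.
    outcomes : length-n_trials vector -- e.g., categorization accuracy (0/1)
               or a single-trial EEG amplitude at one electrode/time point.
    Returns a (H, W) z-scored map of pixels whose visibility covaries with
    the outcome, i.e., the candidate diagnostic regions.
    """
    rng = np.random.default_rng(seed)
    n, h, w = masks.shape
    x = masks.reshape(n, -1).astype(float)
    y = np.asarray(outcomes, dtype=float)
    y = (y - y.mean()) / y.std()            # standardize the outcome

    ci = x.T @ y / n                        # per-pixel covariance with outcome

    # Permutation null: shuffle outcomes to estimate chance-level maps,
    # then z-score the observed map against that null distribution.
    null = np.stack([x.T @ rng.permutation(y) / n for _ in range(n_perm)])
    z = (ci - null.mean(axis=0)) / null.std(axis=0)
    return z.reshape(h, w)
```

In this framing, repeating the computation with single-trial EEG amplitudes at successive time points (instead of behavioral accuracy) is one way to track how the informative features change from the N170 to the P300 window.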