Neuropsychologia

Hierarchical neurocomputations underlying concurrent sound segregation: Connecting periphery to percept



Abstract

Natural soundscapes often contain multiple sound sources at any given time. Numerous studies have reported that in human observers, the perception and identification of concurrent sounds is paralleled by specific changes in cortical event-related potentials (ERPs). Although these studies provide a window into the cerebral mechanisms governing sound segregation, little is known about the subcortical neural architecture and hierarchy of neurocomputations that lead to this robust perceptual process. Using computational modeling, scalp-recorded brainstem/cortical ERPs, and human psychophysics, we demonstrate that a primary cue for sound segregation, i.e., harmonicity, is encoded at the auditory nerve level within tens of milliseconds after the onset of sound and is maintained, largely untransformed, in phase-locked activity of the rostral brainstem. As then indexed by auditory cortical responses, (in)harmonicity is coded in the signature and magnitude of the cortical object-related negativity (ORN) response (150-200 ms). The salience of the resulting percept is then captured in a discrete, categorical-like coding scheme by a late negativity response (N5; ~500 ms latency), just prior to the elicitation of a behavioral judgment. Subcortical activity correlated with cortical evoked responses such that weaker phase-locked brainstem responses (lower neural harmonicity) generated larger ORN amplitude, reflecting the cortical registration of multiple sound objects. Studying multiple brain indices simultaneously helps illuminate the mechanisms and time-course of neural processing underlying concurrent sound segregation and may lead to further development and refinement of physiologically driven models of auditory scene analysis. (C) 2014 Elsevier Ltd. All rights reserved.
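The harmonicity cue central to the abstract can be illustrated with a toy computation: a periodic (harmonic) waveform has a strong autocorrelation peak at its fundamental period, whereas an inharmonic or noisy signal does not. The sketch below is purely illustrative and is not the paper's computational model; the function name, parameter ranges, and signals are all invented for the example.

```python
import numpy as np

def harmonicity_index(signal, fs, fmin=80.0, fmax=400.0):
    """Toy periodicity-strength measure: peak of the normalized
    autocorrelation within a plausible fundamental-period range.
    (Illustrative only; not the model used in the paper.)"""
    x = signal - np.mean(signal)
    # Full autocorrelation; keep non-negative lags, normalize so lag 0 == 1
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / ac[0]
    # Search lags corresponding to fundamentals between fmin and fmax
    lo, hi = int(fs / fmax), int(fs / fmin)
    return float(np.max(ac[lo:hi]))

fs = 16000
t = np.arange(int(0.1 * fs)) / fs
# Harmonic complex: 200 Hz fundamental plus four harmonics
harmonic = sum(np.sin(2 * np.pi * 200 * k * t) for k in range(1, 6))
noise = np.random.default_rng(0).standard_normal(t.size)
print(harmonicity_index(harmonic, fs))  # close to 1: strongly periodic
print(harmonicity_index(noise, fs))     # much lower: no dominant period
```

A high index for the harmonic complex and a low one for noise mirrors, in caricature, the "neural harmonicity" contrast the abstract links to phase-locked brainstem responses and ORN amplitude.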
