Journal: Neuropsychologia

Cortical processing of phonetic and emotional information in speech: A cross-modal priming study


Abstract

The current study employed behavioral and electrophysiological measures to investigate the timing, localization, and neural oscillation characteristics of cortical activities associated with phonetic and emotional information processing of speech. The experimental design used a cross-modal priming paradigm in which the normal adult participants were presented a visual prime followed by an auditory target. Primes were facial expressions that systematically varied in emotional content (happy or angry) and mouth shape (corresponding to /a/ or /i/ vowels). Targets were spoken words that varied by emotional prosody (happy or angry) and vowel (/a/ or /i/). In both the phonetic and prosodic conditions, participants were asked to judge congruency status of the visual prime and the auditory target. Behavioral results showed a congruency effect for both percent correct and reaction time. Two ERP responses, the N400 and late positive response (LPR), were identified in both conditions. Source localization and inter-trial phase coherence of the N400 and LPR components further revealed different cortical contributions and neural oscillation patterns for selective processing of phonetic and emotional information in speech. The results provide corroborating evidence for the necessity of differentiating brain mechanisms underlying the representation and processing of co-existing linguistic and paralinguistic information in spoken language, which has important implications for theoretical models of speech recognition as well as clinical studies on the neural bases of language and social communication deficits. (C) 2016 Elsevier Ltd. All rights reserved.
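The abstract reports inter-trial phase coherence (ITPC) of the N400 and LPR components as an index of neural oscillation patterns. As a minimal sketch of the standard ITPC definition (the magnitude of the mean unit phase vector across trials), not the authors' actual analysis pipeline, the computation can be written as follows; the array shapes and variable names are illustrative assumptions:

```python
import numpy as np

def inter_trial_phase_coherence(phases):
    """ITPC: magnitude of the trial-averaged unit phase vector.

    phases: array of shape (n_trials, n_times) holding instantaneous
    phase angles in radians (e.g. from a Hilbert or wavelet transform),
    typically computed per frequency band.
    Returns shape (n_times,) with values in [0, 1]:
    1 = perfect phase alignment across trials, near 0 = random phases.
    """
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

# Identical phase time-courses across trials yield ITPC of 1;
# unrelated phases yield values near 0 (shrinking with trial count).
rng = np.random.default_rng(0)
locked = np.tile(np.linspace(0, np.pi, 100), (40, 1))    # phase-locked trials
random_ph = rng.uniform(-np.pi, np.pi, size=(40, 100))   # random phases
print(inter_trial_phase_coherence(locked).mean())        # close to 1
print(inter_trial_phase_coherence(random_ph).mean())     # small
```

In practice this statistic is evaluated per frequency band and time point, so higher ITPC at a component's latency indicates stronger phase-locking of the underlying oscillation to stimulus onset.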
