Journal of Speech, Language, and Hearing Research (JSLHR)

Prosody Dominates Over Semantics in Emotion Word Processing: Evidence From Cross-Channel and Cross-Modal Stroop Effects

Abstract

Purpose: Emotional speech communication involves multisensory integration of linguistic (e.g., semantic content) and paralinguistic (e.g., prosody and facial expressions) messages. Previous studies on linguistic versus paralinguistic salience effects in emotional speech processing have produced inconsistent findings. In this study, we investigated the relative perceptual saliency of emotion cues in a cross-channel, auditory-alone task (i.e., a semantics-prosody Stroop task) and a cross-modal audiovisual task (i.e., a semantics-prosody-face Stroop task).

Method: Thirty normal Chinese adults participated in two Stroop experiments with spoken emotion adjectives in Mandarin Chinese. Experiment 1 manipulated the auditory pairing of emotional prosody (happy or sad) with lexical semantic content in congruent and incongruent conditions. Experiment 2 extended the protocol to cross-modal integration by introducing a visual facial expression during auditory stimulus presentation. In each trial, participants judged the emotional information carried by one designated cue channel under selective-attention instructions.

Results: Accuracy and reaction time data indicated that, despite the increased cognitive demand and task complexity of Experiment 2, prosody was consistently more salient than semantic content for emotion word processing, although it did not take precedence over facial expression. Congruent stimuli enhanced performance in both experiments, but the facilitatory effect was smaller in Experiment 2.

Conclusion: Together, the results demonstrate the salient role of paralinguistic prosodic cues in emotion word processing and a congruence facilitation effect in multisensory integration. Our study contributes tonal-language data on how linguistic and paralinguistic messages converge in multisensory speech processing and lays a foundation for further exploration of the brain mechanisms of cross-channel/cross-modal emotion integration, with potential clinical applications.
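To make the factorial Stroop design in the Method section concrete, the sketch below enumerates the cue pairings for both experiments and flags each trial as congruent or incongruent. This is an illustrative assumption, not material from the study: names such as EMOTIONS and make_trials are hypothetical, and the full crossing shown here simplifies whatever partial-congruence conditions the actual Experiment 2 may have used.

```python
# A minimal sketch (assumed, not from the article) of the Stroop
# condition crossings described in the abstract.
from itertools import product

EMOTIONS = ("happy", "sad")  # the two emotion categories manipulated

def make_trials(channels):
    """Fully cross the emotion cue on each channel; a trial is
    congruent only when every channel carries the same emotion."""
    trials = []
    for cues in product(EMOTIONS, repeat=len(channels)):
        trial = dict(zip(channels, cues))
        trial["congruent"] = len(set(cues)) == 1
        trials.append(trial)
    return trials

# Experiment 1: cross-channel auditory task (semantics x prosody).
exp1 = make_trials(("semantics", "prosody"))
# Experiment 2: cross-modal task adds a facial expression channel.
exp2 = make_trials(("semantics", "prosody", "face"))

print(len(exp1), sum(t["congruent"] for t in exp1))  # 4 trials, 2 congruent
print(len(exp2), sum(t["congruent"] for t in exp2))  # 8 trials, 2 congruent
```

Under this simplified crossing, adding the face channel doubles the number of cue combinations while leaving only two fully congruent ones, which is one way to picture the higher task complexity the abstract attributes to Experiment 2.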