Frontiers in Psychology

Feeling backwards? How temporal order in speech affects the time course of vocal emotion recognition



Abstract

Recent studies suggest that the time course for recognizing vocal expressions of basic emotion in speech varies significantly by emotion type, implying that listeners uncover acoustic evidence about emotions at different rates in speech (e.g., fear is recognized most quickly whereas happiness and disgust are recognized relatively slowly; Pell and Kotz). To investigate whether vocal emotion recognition is largely dictated by the amount of time listeners are exposed to speech or the position of critical emotional cues in the utterance, 40 English participants judged the meaning of emotionally-inflected pseudo-utterances presented in a gating paradigm, where utterances were gated as a function of their syllable structure in segments of increasing duration from the end of the utterance (i.e., gated syllable-by-syllable from the offset rather than the onset of the stimulus). Accuracy for detecting six target emotions in each gate condition and the mean identification point for each emotion in milliseconds were analyzed and compared to results from Pell and Kotz. We again found significant emotion-specific differences in the time needed to accurately recognize emotions from speech prosody, and new evidence that utterance-final syllables tended to facilitate listeners' accuracy in many conditions when compared to utterance-initial syllables. The time needed to recognize fear, anger, sadness, and neutral from speech cues was not influenced by how utterances were gated, although happiness and disgust were recognized significantly faster when listeners heard the end of utterances first. Our data provide new clues about the relative time course for recognizing vocally-expressed emotions within the 400–1200 ms time window, while highlighting that emotion recognition from prosody can be shaped by the temporal properties of speech.
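To make the offset-gating manipulation concrete, the following is a minimal Python sketch, not taken from the study itself: the function name `backward_gates`, the sampling rate, and the syllable onset times are illustrative assumptions. It builds gate stimuli that grow syllable-by-syllable from the end of a waveform, so gate 1 contains only the utterance-final syllable, gate 2 the final two syllables, and so on.

```python
# Minimal sketch of offset-based gating (assumptions: mono waveform,
# known syllable onsets, e.g. from a manual or forced-alignment segmentation).
import numpy as np

def backward_gates(waveform: np.ndarray, sr: int, syllable_onsets_s: list[float]) -> list[np.ndarray]:
    """Return gates of increasing duration, growing from the utterance offset.

    waveform          -- mono audio samples
    sr                -- sampling rate in Hz
    syllable_onsets_s -- onset time of each syllable in seconds, ascending
    """
    onsets = [int(t * sr) for t in syllable_onsets_s]
    gates = []
    # Gate 1 = final syllable only; gate 2 = final two syllables; etc.
    for start in reversed(onsets):
        gates.append(waveform[start:])
    return gates

if __name__ == "__main__":
    sr = 16000
    dummy = np.random.randn(sr * 2)        # 2 s of noise standing in for a pseudo-utterance
    onsets = [0.0, 0.4, 0.9, 1.3, 1.7]     # hypothetical syllable onsets (s)
    for i, gate in enumerate(backward_gates(dummy, sr, onsets), 1):
        print(f"gate {i}: {len(gate) / sr:.2f} s heard from the utterance offset")
```

Within each gate the retained audio plays in its natural forward order; only the amount of utterance-final material increases across gates, mirroring the reversed exposure order described in the abstract.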


