Frontiers in Psychology

Musical Expertise Affects Audiovisual Speech Perception: Findings From Event-Related Potentials and Inter-trial Phase Coherence



Abstract

In audiovisual speech perception, visual information from a talker's face during mouth articulation is available before the onset of the corresponding audio speech, and thereby allows the perceiver to use visual information to predict the upcoming audio. This prediction, from phonetically congruent visual information, modulates audiovisual speech perception and leads to a decrease in N1 and P2 amplitudes and latencies compared to perception of audio speech alone. Whether audiovisual experience, such as musical training, influences this prediction is unclear, but if so, it may explain some of the variation observed in previous research. The current study addresses whether audiovisual speech perception is affected by musical training, assessing N1 and P2 event-related potentials (ERPs) as well as inter-trial phase coherence (ITPC). Musicians and non-musicians were presented with the syllable /ba/ in audio-only (AO), video-only (VO), and audiovisual (AV) conditions. With the predictive effect of mouth movement isolated by subtracting the VO response from the AV response (AV-VO), the two groups were similar relative to audio speech: both showed shorter N1 latency and reduced P2 amplitude and latency, and both showed lower ITPC in the delta, theta, and beta bands during audiovisual speech perception. However, musicians showed significant suppression of N1 amplitude and alpha-band desynchronization during audiovisual speech, effects not present in non-musicians. Collectively, the current findings indicate that early sensory processing can be modified by musical experience, which in turn may explain some of the variation in previous AV speech perception research.
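
Since ITPC may be less familiar than ERP amplitude and latency measures, the sketch below illustrates how inter-trial phase coherence is typically computed from epoched single-channel EEG: band-pass filter each trial, take the instantaneous phase, and measure how tightly phases align across trials at each time point. This is a minimal illustration under stated assumptions, not the authors' analysis pipeline; the Butterworth/Hilbert approach, the sampling rate, the band edges, and the names itpc, fs, and band are all introduced here for exposition.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def itpc(trials, fs, band):
    """Inter-trial phase coherence over time for one channel.

    trials : (n_trials, n_samples) epoched EEG, time-locked to stimulus onset
    fs     : sampling rate in Hz
    band   : (low, high) pass-band in Hz, e.g. (4, 8) for theta
    """
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, trials, axis=1)     # isolate the frequency band
    phase = np.angle(hilbert(filtered, axis=1))   # instantaneous phase per trial
    # ITPC(t) = | (1/N) * sum_k exp(i * phase_k(t)) |
    # 1.0 = perfect phase alignment across trials; near 0 = random phase
    return np.abs(np.mean(np.exp(1j * phase), axis=0))

# Toy demo: 50 simulated trials with a phase-locked 6 Hz component plus noise
fs = 256
t = np.arange(0, 1, 1 / fs)
trials = np.sin(2 * np.pi * 6 * t) + np.random.randn(50, t.size)
print(itpc(trials, fs, band=(4, 8)).mean())       # well above chance level

In a study like this one, the measure would be computed per condition (AO, VO, AV) and per band (delta, theta, alpha, beta) and then compared across conditions and groups.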

