Frontiers in Psychology

Considerations in Audio-Visual Interaction Models: An ERP Study of Music Perception by Musicians and Non-musicians

Abstract

Previous research with speech and non-speech stimuli suggested that in audiovisual perception, visual information starting prior to the onset of the corresponding sound can provide visual cues and form a prediction about the upcoming auditory sound. This prediction leads to audiovisual (AV) interaction: auditory and visual perception interact, inducing suppression and speeding up of early auditory event-related potentials (ERPs) such as N1 and P2. To investigate AV interaction, previous research examined N1 and P2 amplitudes and latencies in response to audio-only (AO), video-only (VO), audiovisual (AV), and control (CO) stimuli, and compared AV with auditory perception based on four AV interaction models (AV vs. AO+VO, AV-VO vs. AO, AV-VO vs. AO-CO, AV vs. AO). The current study addresses how different models of AV interaction express N1 and P2 suppression in music perception. Going one step further, it also examines whether previous musical experience, which can potentially lead to higher N1 and P2 amplitudes in auditory perception, influences AV interaction in the different models. Musicians and non-musicians were presented with recordings (AO, AV, VO) of a keyboard /C4/ key being played, as well as CO stimuli. Results showed that the AV interaction models differ in how they express N1 and P2 amplitude and latency suppression. The calculation of the (AV-VO vs. AO) and (AV-VO vs. AO-CO) models has consequences for the resulting N1 and P2 difference waves. Furthermore, while musicians showed higher N1 amplitude in auditory perception than non-musicians, suppression of N1 and P2 amplitudes and latencies was similar for the two groups across the AV models. Collectively, these results suggest that when visual cues from finger and hand movements predict the upcoming sound in AV music perception, suppression of early ERPs is similar for musicians and non-musicians. Notably, the calculation differences across models do not lead to the same pattern of results for N1 and P2, demonstrating that the four models are not interchangeable and are not directly comparable.
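As a rough illustration of how these comparisons operate on ERP waveforms, the Python/NumPy sketch below builds the four model contrasts from per-condition grand averages and reads out N1 and P2 peaks. The array names, placeholder data, epoch, and peak search windows are illustrative assumptions, not details taken from the study.

# Minimal sketch of the four AV interaction comparisons described in the
# abstract, assuming hypothetical grand-average ERP arrays (one value per
# time sample). All numbers below are placeholder assumptions.
import numpy as np

fs = 1000                      # assumed sampling rate in Hz
t = np.arange(-100, 400) / fs  # assumed epoch: -100 ms to +400 ms around sound onset

# Hypothetical grand-average ERPs per condition (random placeholder data).
erp_AO = np.random.randn(t.size)   # audio only
erp_VO = np.random.randn(t.size)   # video only
erp_AV = np.random.randn(t.size)   # audiovisual
erp_CO = np.random.randn(t.size)   # control

# Each model contrasts an AV-based wave with an auditory-based wave.
models = {
    "AV vs. AO+VO":    (erp_AV,          erp_AO + erp_VO),
    "AV-VO vs. AO":    (erp_AV - erp_VO, erp_AO),
    "AV-VO vs. AO-CO": (erp_AV - erp_VO, erp_AO - erp_CO),
    "AV vs. AO":       (erp_AV,          erp_AO),
}

def peak_in_window(wave, t, lo, hi, polarity):
    """Return (latency in s, amplitude) of the extreme point within [lo, hi] s."""
    mask = (t >= lo) & (t <= hi)
    idx = np.argmin(wave[mask]) if polarity == "neg" else np.argmax(wave[mask])
    return t[mask][idx], wave[mask][idx]

for name, (av_wave, a_wave) in models.items():
    # Assumed search windows: N1 around 70-150 ms (negative peak),
    # P2 around 150-250 ms (positive peak) after sound onset.
    for label, wave in (("AV-based", av_wave), ("auditory", a_wave)):
        n1_lat, n1_amp = peak_in_window(wave, t, 0.070, 0.150, "neg")
        p2_lat, p2_amp = peak_in_window(wave, t, 0.150, 0.250, "pos")
        print(f"{name} [{label}] N1: {n1_amp:.2f} at {n1_lat*1000:.0f} ms, "
              f"P2: {p2_amp:.2f} at {p2_lat*1000:.0f} ms")

Under the logic described in the abstract, a smaller (less negative) N1 or smaller P2 in the AV-based wave than in the corresponding auditory wave would be read as amplitude suppression, and an earlier peak as latency facilitation; because the AV-based and auditory waves are built differently in each model, the resulting difference patterns need not coincide.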
