Cross-modal visual-auditory-somatosensory integration in a multimodal object recognition task in humans

International Evoked Potentials Symposium


Abstract

EEG was recorded during a visual-auditory-somatosensory oddball reaction time task to study the relationship between cortical cross-modal processing and reaction time. Visual, auditory and somatosensory stimuli were presented alone and simultaneously in four experimental sessions. Target stimuli were applied in the visual modality to study cross-modal effects on object recognition. Subjects' task was to indicate recognition of the target by pressing a button. EEG was recorded from 31 scalp electrodes. A significant decrease in reaction time confirmed that multisensory integration took place in the multimodal stimulus conditions. Recognition of the target was significantly improved in the audiovisual and auditory-somatosensory-visual conditions, as reflected by significantly decreased reaction times compared with the unimodal visual and somatosensory-visual conditions. Analysis of event-related potentials revealed that P300 latency showed a clear relationship to the behavioral data. Results indicate that audiovisual cross-modal integration is more efficacious in a visual object recognition task than somatosensory-visual integration.
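
The abstract describes two analyses: a within-subject comparison of reaction times across stimulus conditions and an estimate of P300 peak latency from the event-related potentials. As a rough illustration only, the following minimal Python sketch shows how such analyses are commonly carried out; the sampling rate, epoch window, P300 search window (250-500 ms), single parietal channel, and all data below are assumed or synthetic and do not come from the paper.

```python
# Hypothetical sketch (not the authors' code): (1) P300 peak latency from the
# average ERP at one parietal channel, (2) paired comparison of reaction times
# between a unimodal-visual and an audiovisual condition. Data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
fs = 500.0                                  # sampling rate in Hz (assumed)
times = np.arange(-0.2, 0.8, 1.0 / fs)      # epoch: -200 ms to 800 ms (assumed)

def p300_latency(epochs, times, tmin=0.25, tmax=0.5):
    """Peak latency (s) of the trial-averaged ERP within the P300 window."""
    erp = epochs.mean(axis=0)               # average over trials -> ERP
    win = (times >= tmin) & (times <= tmax)
    return times[win][np.argmax(erp[win])]

def make_epochs(n_trials, peak_s):
    """Synthetic single-channel epochs with a Gaussian 'P300' bump."""
    noise = rng.normal(0.0, 1.0, (n_trials, times.size))
    bump = 5.0 * np.exp(-((times - peak_s) ** 2) / (2 * 0.05 ** 2))
    return noise + bump

epochs_v = make_epochs(100, peak_s=0.40)    # unimodal visual condition
epochs_av = make_epochs(100, peak_s=0.35)   # audiovisual (earlier P300 assumed)

print(f"P300 latency, visual:      {p300_latency(epochs_v, times) * 1e3:.0f} ms")
print(f"P300 latency, audiovisual: {p300_latency(epochs_av, times) * 1e3:.0f} ms")

# Per-subject mean reaction times (s), synthetic; a paired t-test mirrors the
# within-subject reaction-time comparison reported in the abstract.
rt_v = rng.normal(0.45, 0.04, 12)
rt_av = rt_v - rng.normal(0.04, 0.01, 12)   # multisensory facilitation
t, p = stats.ttest_rel(rt_v, rt_av)
print(f"paired t-test on RT: t = {t:.2f}, p = {p:.4f}")
```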