Using conventional sound design, the audio signal in virtual reality applications is often rendered as a static stereophonic signal. It is accompanied by a visual signal that allows for interactive behavior such as looking around. In the present test, the influence of a spatial offset between the audio and visual signals is investigated using reaction time measurements in a word recognition task. The audio-visual offset is introduced by presenting a video at horizontal offset angles of up to ±21°, accompanied by a static central audio signal. Measurements are compared to reaction times from a test in which the audio and visual signals are presented at the same angle. Results show that audio-visual offsets between 10° and 20° cause significant differences in reaction time compared to spatially matched presentation.