Typical everyday situations contain a large number of sound sources. In virtual reality applications, where the processing demands of the acoustic rendering of a scene should be kept at a low level, simulating and spatializing a high number of virtual sound sources is challenging. This work presents different solutions for rendering sound sources in virtual scenes with varying levels of interactivity and complexity. For a binaural free-field auralization of up to hundreds of virtual sound sources, a model based on k-means clustering was recently developed, with the main objective of limiting the number of required convolutions. To improve the perceptual quality of the rendering, the model was extended with an efficient correction of the interaural time difference of each virtual sound source. In addition to a brief benchmark analysis of the rendering module, this work also describes how the clustering approach was integrated into the open-source auralization framework Virtual Acoustics.
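The core idea of the clustering approach can be sketched as follows: source directions are grouped with k-means, and each resulting cluster is then rendered with a single HRIR convolution pair instead of one per source, reducing the convolution count from the number of sources to the number of clusters. The sketch below is a minimal, hypothetical illustration of that grouping step in pure Python; it is not the Virtual Acoustics implementation, and the function names and parameters are assumptions.

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means on 3-D source direction vectors.

    Hypothetical sketch: each resulting cluster would later be
    rendered with one HRIR convolution pair, so the convolution
    count drops from len(points) to k.
    """
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each source direction to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # Move each centroid to the mean of its assigned directions.
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = tuple(sum(x) / len(members)
                                     for x in zip(*members))
    return centroids, clusters

# 200 random source directions on the unit sphere, reduced to 8 clusters:
rng = random.Random(1)
sources = []
for _ in range(200):
    v = (rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1))
    n = math.sqrt(sum(c * c for c in v))
    sources.append(tuple(c / n for c in v))
centroids, clusters = kmeans(sources, k=8)
print(len(centroids), sum(len(c) for c in clusters))  # → 8 200
```

Rendering only the 8 cluster centroids instead of all 200 sources is what bounds the per-block convolution cost; the per-source interaural time difference correction mentioned above would then compensate for the spatial error introduced by this grouping.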