Audio production is moving towards an object-based approach, in which content is represented as audio together with metadata describing the sound scene. Current object definitions usually assume that the audio portion of an object is free from interfering sources. This poses a potential problem for object-based capture when microphones cannot be placed close to a source. In this paper, microphone array beamforming is investigated for its ability to separate a mixture into distinct audio objects. Real mixtures recorded by a 48-channel microphone array in reflective rooms were separated, and the results were evaluated using perceptual models in addition to physical measures based on the beam pattern. Applying the beamforming techniques reduced the effect of interfering objects.
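The abstract does not specify which beamforming techniques were used; as a minimal illustrative sketch only, the following shows the simplest classical approach, a time-domain delay-and-sum beamformer, which aligns the target wavefront across the array channels before averaging. All names and parameters here are assumptions for illustration, not the paper's method.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, direction, fs, c=343.0):
    """Illustrative delay-and-sum beamformer (not the paper's method).

    signals:       (M, N) array, one row per microphone channel.
    mic_positions: (M, 3) microphone coordinates in metres.
    direction:     unit vector pointing from the array towards the source.
    fs:            sample rate in Hz; c: speed of sound in m/s.
    """
    M, N = signals.shape
    # Plane-wave model: a mic further along `direction` hears the source
    # earlier by (p . d) / c seconds, so it must be delayed by that amount.
    delays = mic_positions @ direction / c
    delays -= delays.min()                 # make all delays non-negative
    out = np.zeros(N)
    for m in range(M):
        shift = int(round(delays[m] * fs)) # nearest integer-sample delay
        out[shift:] += signals[m, : N - shift or None]
    return out / M                         # average the aligned channels
```

Signals arriving from the steered direction add coherently while off-axis interference adds incoherently, which is the basic mechanism by which beamforming attenuates interfering objects; practical systems refine this with fractional delays, channel weighting, or adaptive (e.g. MVDR-style) designs.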