In this paper, we present a segmentation algorithm for acoustic musical signals based on a hidden Markov model. Through unsupervised learning, we discover regions of the music that present steady statistical properties: textures. We investigate different front-ends for the system and compare their performance. We then show that the resulting segmentation often reflects a structure explained by musicology: chorus and verse, different instrumental sections, etc. Finally, we discuss the necessity of the HMM and conclude that an efficient segmentation of music is more than a static clustering and should make use of the dynamics of the data.
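The abstract's closing point, that segmentation should exploit the dynamics of the data rather than cluster frames statically, can be illustrated with a small sketch. This is not the paper's implementation: it is a minimal numpy example assuming 1-D Gaussian emissions, known state means, and a hand-set "sticky" transition matrix. Viterbi decoding then favours long contiguous segments (textures) where an independent frame-by-frame classifier would flip state on every noisy frame.

```python
import numpy as np

def viterbi_gaussian(x, means, var, trans, prior):
    """Viterbi decoding of a 1-D Gaussian-emission HMM.

    x: (T,) observations; means: (K,) per-state Gaussian means;
    var: shared variance; trans: (K, K) transition matrix;
    prior: (K,) initial state probabilities.
    Returns the most likely state sequence, i.e. the segmentation.
    """
    T, K = len(x), len(means)
    # Log emission probability of each frame under each state.
    ll = -0.5 * ((x[:, None] - means[None, :]) ** 2 / var
                 + np.log(2 * np.pi * var))
    log_a = np.log(trans)
    delta = np.log(prior) + ll[0]          # best log-score ending in each state
    psi = np.zeros((T, K), dtype=int)      # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_a    # (from_state, to_state)
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + ll[t]
    states = np.empty(T, dtype=int)
    states[-1] = delta.argmax()
    for t in range(T - 2, -1, -1):
        states[t] = psi[t + 1][states[t + 1]]
    return states

# Synthetic "feature" stream: two textures with overlapping noise
# (a stand-in for a real acoustic front-end such as spectral features).
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(3.0, 1.0, 100)])
means = np.array([0.0, 3.0])
trans = np.array([[0.99, 0.01],
                  [0.01, 0.99]])          # sticky: penalizes frequent switching
states = viterbi_gaussian(x, means, 1.0, trans, np.array([0.5, 0.5]))

# The transition penalty smooths out frame-level outliers, so the decoded
# path changes state only a few times instead of at every noisy frame.
n_switches = int(np.sum(states[1:] != states[:-1]))
print(n_switches)
```

A purely static clustering of the same frames (e.g. thresholding each frame at 1.5) would produce many spurious state changes wherever the two noise distributions overlap; the HMM's transition structure is what turns frame-level decisions into a handful of coherent segments.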