We introduce a novel unsupervised algorithm for text segmentation. We recast text segmentation as a graph-partitioning task that optimizes the normalized-cut criterion. Central to this framework is a contrastive analysis of lexical distribution that simultaneously maximizes the similarity within each segment and the dissimilarity across segments. Our experimental results show that the normalized-cut algorithm outperforms state-of-the-art techniques on the task of spoken lecture segmentation. Another attractive property of the algorithm is its robustness to noise: its accuracy does not deteriorate significantly when applied to automatically recognized speech. The impact of this segmentation framework extends beyond the text domain. We demonstrate the power of the model by applying it to the segmentation of raw acoustic signal without intermediate speech recognition.
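To make the normalized-cut idea concrete, the following is a minimal sketch (not the paper's implementation) of binary normalized-cut segmentation over a toy transcript. Sentences are graph nodes, edge weights are cosine similarities of word-count vectors, and, because text segments must be contiguous, only the n-1 possible boundary positions need to be evaluated. All names and the toy data are illustrative assumptions.

```python
# Sketch of two-way normalized-cut text segmentation (illustrative only).
# Ncut(A, B) = cut(A, B)/assoc(A, V) + cut(A, B)/assoc(B, V):
# a small cut (low cross-segment similarity) normalized by each segment's
# total association (high within-segment similarity).
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two word-count Counters."""
    num = sum(a[w] * b[w] for w in a if w in b)
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def ncut_boundary(sentences):
    """Return the boundary index minimizing the normalized cut."""
    vecs = [Counter(s.lower().split()) for s in sentences]
    n = len(vecs)
    W = [[cosine(vecs[i], vecs[j]) for j in range(n)] for i in range(n)]
    best, best_b = float("inf"), 1
    for b in range(1, n):  # boundary placed between sentences b-1 and b
        A, B = range(b), range(b, n)
        cut = sum(W[i][j] for i in A for j in B)
        assoc_a = sum(W[i][j] for i in A for j in range(n))
        assoc_b = sum(W[i][j] for i in B for j in range(n))
        if assoc_a and assoc_b:
            score = cut / assoc_a + cut / assoc_b
            if score < best:
                best, best_b = score, b
    return best_b

sents = ["the graph cut partitions nodes",
         "graph partitioning uses cut weights",
         "speech signals carry acoustic energy",
         "acoustic features describe the speech signal"]
print(ncut_boundary(sents))  # -> 2 (topic shift after the second sentence)
```

The full algorithm in the paper handles multi-way partitions and long transcripts; the two-segment case above only illustrates why normalizing the cut by each segment's total association favors balanced, internally cohesive segments over trivially small ones.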