Auditory segmentation is critical for complex auditory pattern processing. We present a generic neural network framework for auditory pattern segmentation. The network is a two-dimensional array of laterally coupled neural oscillators with a global inhibitor; one dimension represents time and the other represents frequency. We show that this architecture can, in real time, group auditory features into a segment through phase synchrony and segregate different segments through desynchronization. The network reproduces the phenomenon that auditory stream segregation depends critically on the rate of presentation. The neural plausibility and possible extensions of the model are discussed.
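The architecture described above can be sketched in simplified form. The following is an illustrative toy simulation, not the paper's exact model: a small frequency-by-time grid of Wang-Terman relaxation oscillators with nearest-neighbor excitatory coupling and a single global inhibitor. All parameter values, the toy two-segment stimulus, and the restriction of lateral coupling to jointly stimulated neighbors are simplifying assumptions made for the sketch.

```python
import numpy as np

def neighbor_sum(a):
    """Sum over 4-connected grid neighbors (no wrap-around)."""
    s = np.zeros_like(a)
    s[1:, :] += a[:-1, :]
    s[:-1, :] += a[1:, :]
    s[:, 1:] += a[:, :-1]
    s[:, :-1] += a[:, 1:]
    return s

def simulate(stim, T=200.0, dt=0.02, seed=0):
    """Euler-integrate a grid of relaxation oscillators driven by `stim`."""
    rng = np.random.default_rng(seed)
    x = -2.0 + 0.2 * rng.random(stim.shape)   # fast (excitatory) variable
    y = 0.05 * rng.random(stim.shape)         # slow (recovery) variable
    z = 0.0                                   # global inhibitor
    eps, gamma, beta = 0.02, 6.0, 0.1         # Wang-Terman time scales
    Wc, Wz, phi, theta = 2.0, 1.5, 3.0, -0.1  # coupling/inhibitor gains (assumed)
    I = np.where(stim, 0.2, -0.5)             # input: oscillate vs. stay silent
    x_max = np.full(stim.shape, -np.inf)
    for _ in range(int(T / dt)):
        active = (x > theta).astype(float)
        # Excitation from active, stimulated neighbors; inhibition from z.
        S = Wc * stim * neighbor_sum(active * stim) - Wz * (z > 0.1)
        dx = 3 * x - x**3 + 2.0 + I - y + S
        dy = eps * (gamma * (1.0 + np.tanh(x / beta)) - y)
        dz = phi * (float(active.any()) - z)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        x_max = np.maximum(x_max, x)
    return x_max

# Toy stimulus: two rectangular "segments" on a 4x6 frequency-time grid.
stim = np.zeros((4, 6), dtype=bool)
stim[0:2, 0:3] = True   # segment A
stim[2:4, 4:6] = True   # segment B
x_max = simulate(stim)
```

In this sketch, stimulated oscillators jump to the active phase (x rises above 0) while unstimulated ones remain on the silent branch; lateral coupling pulls oscillators within a segment into synchrony, and the global inhibitor discourages the two segments from firing together, which is the synchrony/desynchrony segmentation mechanism the abstract describes.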