A better understanding of how the human auditory system represents and analyzes sounds, and of how hearing impairment affects such processing, is of great interest to researchers in auditory neuroscience, audiology, and speech communication, as well as for applications in hearing-instrument and speech technology. In this thesis, the primary focus was on the development and evaluation of a computational model of human auditory signal processing and perception. The model was initially designed to simulate the normal-hearing auditory system, with particular focus on the nonlinear processing in the inner ear, or cochlea. The model was shown to account for various aspects of spectro-temporal processing and perception in tasks of intensity discrimination, tone-in-noise detection, forward masking, spectral masking, and amplitude modulation detection. Second, a series of experiments was performed to characterize the effects of cochlear damage on listeners' auditory processing in terms of sensitivity loss and reduced temporal and spectral resolution. The results showed that listeners with comparable audiograms can have very different estimated cochlear input-output functions, frequency selectivity, intensity discrimination limens, and effects of simultaneous and forward masking. Part of the measured data was used to adjust the parameters of the model stages that simulate cochlear processing; the remaining data were used to evaluate the fitted models. It was shown that an accurate simulation of cochlear input-output functions, in addition to the audiogram, played a major role in accounting for both sensitivity and supra-threshold processing. Finally, the model was used as a front-end in a framework developed to predict consonant discrimination in a diagnostic rhyme test. The framework was constructed such that discrimination errors originating from the front-end and the back-end were separated.
The front-end was fitted to individual listeners with cochlear hearing loss based on non-speech data, and speech data were obtained from the same listeners. Most observations in the measured consonant discrimination error patterns were predicted by the model, although the model systematically underestimated error rates for a few particular acoustic-phonetic features. These results reflect a relationship between basic auditory processing deficits and reduced speech perception performance in listeners with cochlear hearing loss. Overall, this work suggests a possible explanation for the variability in the consequences of cochlear hearing loss. The proposed model might be an interesting tool for, e.g., the evaluation of hearing-aid signal processing.