Current vocal output generation systems lack portability and variability. Text-to-speech synthesis is not well accepted by users because of its lack of conviviality. We propose alternative approaches to generating spoken utterances that fulfill the quality requirements of a speech output system. Our framework for language generation relies on conceptual segments; we control their acoustic realization by means of four basic methods based on four levels of representation: signal concatenation, text-to-speech, mimicking synthesis, and phonological prosodic commands driving the speech synthesizer. The examples produced represent a step towards flexible and contextually appropriate generation of spoken utterances.