Much work has been done on dialogue modeling for spoken and multi-modal human-computer interaction, but problems arise in situations that do not correspond to the dialogue model. For this reason, we propose information-centered dialogue processing, in which the actions to be taken by the dialogue system are determined as a function of the information available in the discourse, the database, and the domain model. To arrive at fully specified representations of the intended actions, the specificity of the representations is increased by unification, integrating information from multi-modal input, database access, and domain knowledge. Our approach differs from other state-of-the-art systems in that it does not rely on explicit dialogue models. Instead, we show how partial and under-specified representations of the situation can be used in a spoken dialogue system to generate clarification questions and to guide the user toward his or her communicative goal. We show furthermore how probabilistic information can be used to disambiguate without asking clarification questions. Evaluation results and dialogue examples demonstrate the flexibility and naturalness of our approach.
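The core idea — combining partial representations from several information sources by unification until the intended action is fully specified — can be illustrated with a minimal sketch. All names and structures below are hypothetical and purely illustrative; the paper's actual representation formalism is not shown here.

```python
# Hypothetical sketch of unification over partial, under-specified
# representations, modeled as nested dictionaries of slot/value pairs.

def unify(a, b):
    """Merge two partial representations; return None on conflict."""
    result = dict(a)
    for key, value in b.items():
        if key not in result:
            result[key] = value                    # b supplies a missing slot
        elif isinstance(result[key], dict) and isinstance(value, dict):
            merged = unify(result[key], value)     # recurse into sub-structures
            if merged is None:
                return None                        # incompatible sub-structures
            result[key] = merged
        elif result[key] != value:
            return None                            # conflicting atomic values
    return result

# Partial information from spoken input and from a database lookup
# (illustrative slot names, not from the paper):
speech   = {"act": "book", "movie": {"title": "Matrix"}}
database = {"movie": {"title": "Matrix", "theater": "Odeon"}}

merged = unify(speech, database)
# A slot still missing after unification (e.g. the show time) would
# trigger a clarification question; a failed unification signals
# conflicting information between the sources.
```

In this view the dialogue strategy falls out of the state of the representation itself: the system asks only about what unification could not fill in, rather than following a fixed dialogue script.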