We demonstrate practical dialogue management techniques for dialogues involving multiple concurrent tasks or activities. Conversational context for concurrent activities is computed using a "Dialogue Move Tree" and an "Activity Tree", which represent multiple interleaved threads of dialogue about different activities and their execution status. Dialogue "threading" also allows the dynamic use of multiple recognition language models, depending on dialogue context, resulting in faster, more robust recognition. We also demonstrate the incremental message selection, aggregation, and generation methods employed in this context.

The domain of this demonstration is conversational interaction with a robot helicopter, or UAV ('Unmanned Aerial Vehicle') (Doherty et al., 2000). The same dialogue management system is also being used for intelligent tutoring applications and "in-car" dialogues. This type of application domain is more complex and demanding than the usual information-seeking applications deployed commercially (e.g. ATIS). In particular, interactions with such a system are not scriptable in advance, rely on mixed initiative in conversation, and may be about multiple interleaved tasks.

In such 'practical' dialogues (Allen et al., 2001) we wish to communicate with devices about their possible actions, their plans, goals, and the tasks they are currently attempting. For these reasons we built a dialogue manager that represents (possibly collaborative) activities and their execution status, and tracks multiple threads of dialogue about concurrent and planned activities. A layer of abstraction to "activity models" also allows us to construct a domain-general dialogue move engine, which uses application-specific activity models.
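The two structures described above can be sketched in a few lines of code. The following is a minimal, hypothetical illustration, not the authors' implementation: an Activity Tree whose nodes carry an execution status, and a Dialogue Move Tree that attaches each incoming move to the thread (activity) it concerns. All class and field names here are assumptions made for illustration, including the mapping from open threads to recognition language models.

```python
# Hypothetical sketch of an Activity Tree plus a threaded Dialogue Move
# Tree. Names, fields, and the thread-to-language-model mapping are
# illustrative assumptions, not the system's actual API.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Activity:
    name: str
    status: str = "planned"            # e.g. planned -> current -> done/failed
    children: list["Activity"] = field(default_factory=list)

    def add(self, child: "Activity") -> "Activity":
        self.children.append(child)
        return child

@dataclass
class DialogueMove:
    move_type: str                       # e.g. "command", "wh-question", "report"
    utterance: str
    activity: Optional[Activity] = None  # which activity the move is about

class DialogueMoveTree:
    """Attaches each new dialogue move to the thread it addresses."""
    def __init__(self) -> None:
        self.threads: dict[str, list[DialogueMove]] = {}

    def attach(self, move: DialogueMove) -> None:
        key = move.activity.name if move.activity else "general"
        self.threads.setdefault(key, []).append(move)

    def active_language_models(self) -> list[str]:
        # Sketch of context-dependent recognition: one recognition
        # language model enabled per open dialogue thread.
        return [f"lm_{name}" for name in self.threads]

# Two interleaved threads about concurrent UAV activities:
root = Activity("mission", status="current")
fly = root.add(Activity("fly_to_tower", status="current"))
search = root.add(Activity("search_for_vehicle"))

dmt = DialogueMoveTree()
dmt.attach(DialogueMove("command", "Fly to the tower.", fly))
dmt.attach(DialogueMove("command", "Also look for a red car.", search))
dmt.attach(DialogueMove("report", "Now taking off.", fly))   # back to thread 1
```

After these three moves, the flight thread holds two moves and the search thread one, and both threads' language models are active for recognition, which is the sense in which threading narrows the recognizer's context.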