
MIND: A CONTEXT-BASED MULTIMODAL INTERPRETATION FRAMEWORK IN CONVERSATIONAL SYSTEMS



Abstract

In a multimodal human-machine conversation, user inputs are often abbreviated or imprecise. Simply fusing multimodal inputs together may not be sufficient to derive a complete understanding of the inputs. Aiming to handle a wide variety of multimodal inputs, we are building a context-based multimodal interpretation framework called MIND (Multimodal Interpreter for Natural Dialog). MIND is unique in its use of a variety of contexts, such as domain context and conversation context, to enhance multimodal interpretation. In this chapter, we first describe a fine-grained semantic representation that captures salient information from user inputs and the overall conversation, and then present a context-based interpretation approach that enables MIND to reach a full understanding of user inputs, including those that are abbreviated or imprecise.
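To illustrate the idea the abstract describes, the sketch below is a minimal, hypothetical example (the names, slot structure, and fusion logic are assumptions, not taken from the MIND system): partial semantic frames extracted from speech and gesture are fused, and any slots left unresolved, such as a deictic reference or a missing attribute, are filled from the conversation context and domain context.

```python
# Hypothetical sketch of context-based multimodal interpretation.
# Not the MIND implementation; names and structures are illustrative only.
from dataclasses import dataclass, field

@dataclass
class PartialFrame:
    modality: str          # "speech" or "gesture"
    slots: dict            # partially filled semantic slots

@dataclass
class Context:
    conversation_focus: dict = field(default_factory=dict)  # object most recently in focus
    domain_defaults: dict = field(default_factory=dict)     # domain knowledge, e.g. default attribute

def interpret(frames, ctx):
    """Fuse partial frames, then use context to complete the interpretation."""
    fused = {}
    for frame in frames:
        for slot, value in frame.slots.items():
            # Prefer concrete values over unresolved placeholders during fusion.
            if fused.get(slot) in (None, "<deictic>"):
                fused[slot] = value
    # An unresolved reference falls back to the conversation focus.
    if fused.get("object") == "<deictic>":
        fused["object"] = ctx.conversation_focus.get("object", "<unresolved>")
    # A missing attribute is filled from domain knowledge.
    fused.setdefault("attribute", ctx.domain_defaults.get("attribute"))
    return fused

if __name__ == "__main__":
    ctx = Context(conversation_focus={"object": "house_12"},
                  domain_defaults={"attribute": "price"})
    # "How much is this one?" plus a pointing gesture: the gesture resolves the reference.
    print(interpret([PartialFrame("speech", {"intent": "ask_value", "object": "<deictic>"}),
                     PartialFrame("gesture", {"object": "house_7"})], ctx))
    # "And this?" with no gesture: the conversation focus resolves the reference instead.
    print(interpret([PartialFrame("speech", {"intent": "ask_value", "object": "<deictic>"})], ctx))
```

In this toy setup, fusion alone handles the first query, while the second, more abbreviated query can only be completed by consulting the conversation context, which is the gap the chapter's context-based approach is meant to address.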

