Sequential Dialogue Context Modeling for Spoken Language Understanding

Abstract

Spoken Language Understanding (SLU) is a key component of goal-oriented dialogue systems, parsing user utterances into semantic frame representations. Traditionally, SLU does not utilize the dialogue history beyond the previous system turn, and contextual ambiguities are left to downstream components. In this paper, we explore novel approaches for modeling dialogue context in a recurrent neural network (RNN) based language understanding system. We propose the Sequential Dialogue Encoder Network, which encodes context from the dialogue history in chronological order. We compare the performance of the proposed architecture with two context models: one that uses only the previous-turn context, and another that encodes dialogue context in a memory network but loses the order of utterances in the dialogue history. Experiments on a multi-domain dialogue dataset demonstrate that the proposed architecture reduces semantic frame error rates.
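The core idea described in the abstract — summarizing each past turn and then composing the turn summaries in chronological order so the context representation is order-aware — can be sketched as follows. This is a minimal illustration under assumed details (PyTorch, GRU encoders, hypothetical module and parameter names), not the authors' exact Sequential Dialogue Encoder Network.

```python
# Minimal sketch (not the paper's exact model): chronological dialogue-context
# encoding for SLU. All names, sizes, and design details are illustrative.
import torch
import torch.nn as nn


class SequentialContextEncoder(nn.Module):
    """Encodes the dialogue history turn by turn, preserving temporal order."""

    def __init__(self, vocab_size, emb_dim=64, turn_dim=128, ctx_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Bi-GRU summarizes the tokens of a single utterance into one vector.
        self.turn_enc = nn.GRU(emb_dim, turn_dim // 2, batch_first=True,
                               bidirectional=True)
        # A second GRU runs over the per-turn vectors in chronological order,
        # so the final hidden state is an order-aware summary of the history
        # (unlike a bag-of-turns memory representation).
        self.ctx_enc = nn.GRU(turn_dim, ctx_dim, batch_first=True)

    def encode_turn(self, token_ids):
        # token_ids: (batch, seq_len) -> (batch, turn_dim)
        _, h = self.turn_enc(self.embed(token_ids))
        return torch.cat([h[0], h[1]], dim=-1)

    def forward(self, history):
        # history: list of (batch, seq_len) token-id tensors, oldest turn first.
        turn_vecs = torch.stack([self.encode_turn(t) for t in history], dim=1)
        _, h = self.ctx_enc(turn_vecs)   # h: (1, batch, ctx_dim)
        return h.squeeze(0)              # order-aware dialogue-context vector


if __name__ == "__main__":
    enc = SequentialContextEncoder(vocab_size=1000)
    history = [torch.randint(1, 1000, (2, 7)) for _ in range(3)]  # 3 past turns
    print(enc(history).shape)  # torch.Size([2, 128])
```

In an SLU pipeline, the resulting context vector would typically be combined with the current utterance's token encodings before slot tagging and intent classification; a memory-network baseline, by contrast, would pool the same turn vectors with attention and thus discard their order.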
