Computer Speech and Language

Speaker-Informed Time-and-Content-Aware Attention for Spoken Language Understanding



Abstract

To mitigate the ambiguity of spoken language understanding (SLU) of an utterance, we propose contextual models that consider the relevant context by effectively using temporal and content-related information. We first propose two axes: 'Awareness' and 'Attention Level'. Awareness comprises three methods that consider the timing or content similarity of the context; Attention Level comprises three methods that consider speaker roles when calculating the importance of each historic utterance. By combining one method from each axis, we build various contextual models. The proposed models are designed to learn automatically, from a dataset, the importance of previous utterances in terms of time and content. We also propose several kinds of speaker information that help improve SLU accuracy. The proposed models achieved state-of-the-art F1 scores in experiments on the Dialog State Tracking Challenge (DSTC) 4 and Loqui benchmark datasets, and in-depth analysis confirmed that the proposed methods are effective in improving SLU accuracy.
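The scoring scheme the abstract describes — content similarity to the current utterance, discounting by temporal distance, and weighting by speaker role — can be illustrated with a minimal sketch. This is not the paper's formulation: the function name `contextual_attention` and the fixed `decay` and `role_weights` values are hypothetical stand-ins for quantities the proposed models learn from data.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-d score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def contextual_attention(current, history, distances, roles,
                         decay=0.5, role_weights=None):
    """Score each historic utterance by content similarity (dot product
    with the current utterance embedding), discount older utterances
    exponentially by distance, scale by the speaker's role, and
    normalize the scores into attention weights."""
    if role_weights is None:
        # hypothetical roles; DSTC4 dialogs involve a guide and a tourist
        role_weights = {"guide": 1.0, "tourist": 1.0}
    sims = history @ current                            # content similarity
    time_disc = np.exp(-decay * np.asarray(distances))  # older -> smaller
    role_scale = np.array([role_weights[r] for r in roles])
    weights = softmax(sims * time_disc * role_scale)
    context = weights @ history                         # weighted context vector
    return weights, context

# toy example: three historic utterances in a 4-d embedding space
rng = np.random.default_rng(0)
hist = rng.normal(size=(3, 4))
cur = rng.normal(size=4)
w, ctx = contextual_attention(cur, hist, distances=[3, 2, 1],
                              roles=["guide", "tourist", "guide"])
```

In the paper's models the decay and role contributions are learned end to end rather than fixed constants; the sketch only shows how the three signals combine multiplicatively before normalization.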


