IEEE/ACM Transactions on Audio, Speech, and Language Processing

Using Recurrent Neural Networks for Slot Filling in Spoken Language Understanding

Abstract

Semantic slot filling is one of the most challenging problems in spoken language understanding (SLU). In this paper, we propose to use recurrent neural networks (RNNs) for this task, and present several novel architectures designed to efficiently model past and future temporal dependencies. Specifically, we implemented and compared several important RNN architectures, including Elman, Jordan, and hybrid variants. To facilitate reproducibility, we implemented these networks with the publicly available Theano neural network toolkit and conducted experiments on the well-known airline travel information system (ATIS) benchmark. In addition, we compared the approaches on two custom SLU data sets from the entertainment and movies domains. Our results show that the RNN-based models outperform the conditional random field (CRF) baseline by 2% in absolute error reduction on the ATIS benchmark. We improve the state-of-the-art by 0.5% in the entertainment domain and by 6.7% in the movies domain.
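As a minimal sketch of the Elman-style recurrence the abstract refers to, the following tags each input token with a slot label by carrying a hidden state over past context. All dimensions and the random weights are illustrative assumptions for demonstration; this is not the authors' Theano implementation:

```python
import numpy as np

# Elman recurrence: h_t = sigmoid(x_t W_x + h_{t-1} W_h); y_t = softmax(h_t W_y)
rng = np.random.default_rng(0)
vocab_size, emb_dim, hidden_dim, n_slots = 20, 8, 16, 5  # illustrative sizes

E = rng.normal(scale=0.1, size=(vocab_size, emb_dim))     # word embeddings
W_x = rng.normal(scale=0.1, size=(emb_dim, hidden_dim))   # input -> hidden
W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # hidden -> hidden
W_y = rng.normal(scale=0.1, size=(hidden_dim, n_slots))   # hidden -> slot labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def tag_sentence(word_ids):
    """Return one slot-label id per token via an Elman forward pass."""
    h = np.zeros(hidden_dim)
    labels = []
    for w in word_ids:
        h = sigmoid(E[w] @ W_x + h @ W_h)  # recurrence over past tokens
        p = softmax(h @ W_y)               # per-token slot distribution
        labels.append(int(p.argmax()))
    return labels

print(tag_sentence([3, 7, 1, 12]))  # one label per input token
```

A Jordan variant would instead feed the previous output distribution `p` (rather than `h`) back into the recurrence; training would add a cross-entropy loss over the per-token slot distributions.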
