Conference on Empirical Methods in Natural Language Processing

Closing Brackets with Recurrent Neural Networks



Abstract

Many natural and formal languages contain words or symbols that require a matching counterpart for making an expression well-formed. The combination of opening and closing brackets is a typical example of such a construction. Due to their commonness, the ability to follow such rules is important for language modeling. Currently, recurrent neural networks (RNNs) are extensively used for this task. We investigate whether they are capable of learning the rules of opening and closing brackets by applying them to synthetic Dyck languages that consist of different types of brackets. We provide an analysis of the statistical properties of these languages as a baseline and show strengths and limits of Elman-RNNs, GRUs and LSTMs in experiments on random samples of these languages. In terms of perplexity and prediction accuracy, the RNNs get close to the theoretical baseline in most cases.
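The Dyck languages described in the abstract consist of well-formed strings over several bracket types, where every opening bracket is later closed in last-in-first-out order. As a hedged illustration of the kind of synthetic data such experiments use (the paper's exact generator and sampling distribution are not given here; the 0.5 open/close probability and two bracket types are assumptions for the sketch), a random sampler and a membership check might look like:

```python
import random

def sample_dyck(n_pairs, brackets=("()", "[]"), seed=None):
    """Sample a random well-formed word with n_pairs bracket pairs.

    At each step, either open a new bracket (while pairs remain) or
    close the most recent unmatched one (while the stack is non-empty).
    The open/close probability of 0.5 is an illustrative choice, not
    necessarily the distribution used in the paper.
    """
    rng = random.Random(seed)
    stack, out = [], []
    remaining = n_pairs
    while remaining > 0 or stack:
        if remaining > 0 and (not stack or rng.random() < 0.5):
            pair = rng.choice(brackets)
            stack.append(pair[1])  # remember the required closer
            out.append(pair[0])
            remaining -= 1
        else:
            out.append(stack.pop())
    return "".join(out)

def is_dyck(word, brackets=("()", "[]")):
    """Check membership in the Dyck language over the given bracket types."""
    closer_of = {p[0]: p[1] for p in brackets}
    stack = []
    for ch in word:
        if ch in closer_of:
            stack.append(closer_of[ch])
        elif not stack or stack.pop() != ch:
            return False
    return not stack
```

A language model trained on such samples must, in effect, track the stack to predict which closing bracket is admissible next, which is what makes these languages a probe for RNN memory.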
