What do RNN Language Models Learn about Filler-Gap Dependencies?

Abstract

RNN language models have achieved state-of-the-art perplexity results and have proven useful in a suite of NLP tasks, but it is as yet unclear what syntactic generalizations they learn. Here we investigate whether state-of-the-art RNN language models represent long-distance filler-gap dependencies and constraints on them. Examining RNN behavior on experimentally controlled sentences designed to expose filler-gap dependencies, we show that RNNs can represent the relationship in multiple syntactic positions and over large spans of text. Furthermore, we show that RNNs learn a subset of the restrictions on filler-gap dependencies known as island constraints: RNNs show evidence for wh-islands, adjunct islands, and complex NP islands. These studies demonstrate that state-of-the-art RNN models are able to learn and generalize about empty syntactic positions.
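The experimentally controlled paradigm the abstract refers to is commonly implemented as a 2x2 surprisal design that crosses the presence of a wh-filler with the presence of a gap, reading surprisal off the region just after the gap site. Below is a minimal sketch of that design, assuming a generic word-level RNN LM; the tiny untrained LSTM, the toy sentences, and all names here are illustrative stand-ins, not the models or materials used in the study.

```python
import math
import torch
import torch.nn as nn

# Hypothetical stand-in LM; real studies use a large pretrained RNN LM.
class TinyRNNLM(nn.Module):
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, ids):
        h, _ = self.lstm(self.embed(ids))
        return self.out(h)  # next-token logits at each position

def surprisal(model, vocab, words):
    """Per-word surprisal -log2 p(w_t | w_<t) under the LM."""
    ids = torch.tensor([[vocab[w] for w in words]])
    with torch.no_grad():
        logits = model(ids)
    logp = torch.log_softmax(logits, dim=-1)
    # logits at position t predict word t+1; convert nats to bits
    return [-logp[0, t, ids[0, t + 1]].item() / math.log(2)
            for t in range(len(words) - 1)]

# 2x2 design crossing filler presence with gap presence (toy sentences).
conditions = {
    ("+filler", "+gap"): "i know what the lion devoured at sunrise .".split(),
    ("-filler", "+gap"): "i know that the lion devoured at sunrise .".split(),
    ("+filler", "-gap"): "i know what the lion devoured a gazelle at sunrise .".split(),
    ("-filler", "-gap"): "i know that the lion devoured a gazelle at sunrise .".split(),
}

vocab = {w: i for i, w in
         enumerate(sorted({w for s in conditions.values() for w in s}))}
model = TinyRNNLM(len(vocab)).eval()  # untrained here, for illustration only

def region_surprisal(words, region=("at", "sunrise")):
    """Sum surprisal over the post-gap region of interest."""
    s = surprisal(model, vocab, words)
    return sum(s[t] for t in range(len(words) - 1) if words[t + 1] in region)

S = {c: region_surprisal(w) for c, w in conditions.items()}
# Wh-licensing interaction: for an LM that represents the dependency,
# a filler should reduce post-gap surprisal when a gap is present more
# than it does when the gap is filled, making this quantity negative.
interaction = (S[("+filler", "+gap")] - S[("-filler", "+gap")]) \
            - (S[("+filler", "-gap")] - S[("-filler", "-gap")])
print(f"licensing interaction over post-gap region: {interaction:.2f} bits")
```

With a trained LM in place of the untrained stand-in, a reliably negative interaction over the post-gap region is the kind of evidence the abstract appeals to, and attenuation of that interaction inside islands (wh-islands, adjunct islands, complex NPs) is how the constraint findings are read off.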
