Conference on Empirical Methods in Natural Language Processing

Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items



Abstract

In this paper, we attempt to link the inner workings of a neural language model to linguistic theory, focusing on a complex phenomenon well discussed in formal linguistics: (negative) polarity items. We briefly discuss the leading hypotheses about the licensing contexts that allow negative polarity items and evaluate to what extent a neural language model has the ability to correctly process a subset of such constructions. We show that the model finds a relation between the licensing context and the negative polarity item and appears to be aware of the scope of this context, which we extract from a parse tree of the sentence. With this research, we hope to pave the way for other studies linking formal linguistics to deep learning.

