Home > Foreign Conference Proceedings > 1st EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, 2018 > Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items

Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items



Abstract

In this paper, we attempt to link the inner workings of a neural language model to linguistic theory, focusing on a complex phenomenon well discussed in formal linguistics: (negative) polarity items. We briefly discuss the leading hypotheses about the licensing contexts that allow negative polarity items and evaluate to what extent a neural language model has the ability to correctly process a subset of such constructions. We show that the model finds a relation between the licensing context and the negative polarity item and appears to be aware of the scope of this context, which we extract from a parse tree of the sentence. With this research, we hope to pave the way for other studies linking formal linguistics to deep learning.
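The abstract notes that the scope of the licensing context is extracted from a parse tree of the sentence. As a rough illustration of what such an extraction can look like, here is a minimal stdlib-only sketch; the helper names (`parse_sexpr`, `licensor_scope`) and the smallest-constituent heuristic are our own assumptions for illustration, not the authors' actual procedure.

```python
# Hedged sketch: approximate the "scope" of a negative licensor from a
# bracketed constituency parse. Illustrative only; the paper's own
# extraction procedure may differ.

def parse_sexpr(s):
    """Parse a bracketed tree such as
    '(S (NP nobody) (VP (V has) (NP (DT any) (NN idea))))'
    into nested (label, children) tuples; leaves are plain word strings."""
    tokens = s.replace('(', ' ( ').replace(')', ' ) ').split()

    def helper(i):
        if tokens[i] == '(':
            label = tokens[i + 1]
            i += 2
            children = []
            while tokens[i] != ')':
                child, i = helper(i)
                children.append(child)
            return (label, children), i + 1
        return tokens[i], i + 1

    tree, _ = helper(0)
    return tree

def leaves(node):
    """Collect the terminal words under a node, left to right."""
    if isinstance(node, str):
        return [node]
    return [w for child in node[1] for w in leaves(child)]

def licensor_scope(node, licensors):
    """Return the yield of the smallest constituent that contains a
    licensor plus additional material -- a crude stand-in for the
    licensor's scope. Returns None if no licensor is present."""
    if isinstance(node, str):
        return None
    for child in node[1]:
        found = licensor_scope(child, licensors)
        if found is not None:
            return found
    words = leaves(node)
    if any(w in licensors for w in words) and len(words) > 1:
        return words
    return None

licensed = parse_sexpr("(S (NP nobody) (VP (V has) (NP (DT any) (NN idea))))")
scope = licensor_scope(licensed, {"nobody", "not", "n't"})
print(scope)              # the clause headed by the licensor
print("any" in scope)     # the NPI falls inside the extracted scope
```

Checking whether the NPI (here, "any") falls inside the extracted scope mirrors the licensed-versus-unlicensed contrast that the paper's evaluation probes in the language model.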


