Conference of the European Chapter of the Association for Computational Linguistics

What Do Recurrent Neural Network Grammars Learn About Syntax?



Abstract

Recurrent neural network grammars (RNNG) are a recently proposed probabilistic generative modeling family for natural language. They show state-of-the-art language modeling and parsing performance. We investigate what information they learn, from a linguistic perspective, through various ablations to the model and the data, and by augmenting the model with an attention mechanism (GA-RNNG) to enable closer inspection. We find that explicit modeling of composition is crucial for achieving the best performance. Through the attention mechanism, we find that headedness plays a central role in phrasal representation (with the model's latent attention largely agreeing with predictions made by hand-crafted head rules, albeit with some important differences). By training grammars without nonterminal labels, we find that phrasal representations depend minimally on nonterminals, providing support for the endocentricity hypothesis.
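The GA-RNNG described in the abstract adds attention to the composition step, so that each phrase vector is built from its children's vectors with explicit, inspectable weights; the child receiving most of the attention mass can be read as the phrase's head. As a loose illustration only (not the authors' implementation; the bilinear scoring matrix W_query, the vector dimensions, and the plain weighted sum are assumptions made for this sketch), an attention-weighted composition step might look like the following Python/NumPy code:

    # Illustrative sketch of attention-weighted composition over a
    # constituent's children, in the spirit of GA-RNNG (simplified).
    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def compose(nonterminal_vec, child_vecs, W_query):
        """Compose child vectors into one phrase vector via attention.

        nonterminal_vec: (d,) embedding of the nonterminal (used as query)
        child_vecs:      (n, d) embeddings of the constituent's children
        W_query:         (d, d) hypothetical bilinear scoring matrix
        Returns the phrase vector and the attention weights, which serve
        as a proxy for "headedness".
        """
        scores = child_vecs @ (W_query @ nonterminal_vec)  # one score per child
        attn = softmax(scores)                             # attention distribution
        phrase = attn @ child_vecs                          # weighted sum of children
        return phrase, attn

    # Toy usage: an NP with three children; inspecting the weights shows
    # which child the model treats as most head-like.
    rng = np.random.default_rng(0)
    d = 8
    nt = rng.normal(size=d)
    children = rng.normal(size=(3, d))
    W = rng.normal(size=(d, d))
    phrase_vec, weights = compose(nt, children, W)
    print("attention over children:", np.round(weights, 3))

This sketch only shows why attention makes the composition function interpretable: the learned weights over children can be compared directly with hand-crafted head rules, as the paper does.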
