Second annual meeting of the Society for Computation in Linguistics

Do RNNs learn human-like abstract word order preferences?



Abstract

RNN language models have achieved state-of-the-art results on various tasks, but what exactly they are representing about syntax is as yet unclear. Here we investigate whether RNN language models learn human-like word order preferences in syntactic alternations. We collect language model surprisal scores for controlled sentence stimuli exhibiting major syntactic alternations in English: heavy NP shift, particle shift, the dative alternation, and the genitive alternation. We show that RNN language models reproduce human preferences in these alternations based on NP length, animacy, and definiteness. We collect human acceptability ratings for our stimuli, in the first acceptability judgment experiment directly manipulating the predictors of syntactic alternations. We show that the RNNs' performance is similar to the human acceptability ratings and is not matched by an n-gram baseline model. Our results show that RNNs learn the abstract features of weight, animacy, and definiteness which underlie soft constraints on syntactic alternations.
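The core comparison described above — scoring two word orders of the same sentence and taking the lower-surprisal order as the model's preference — can be sketched with a toy bigram model in place of the paper's RNN or n-gram baseline. The corpus, sentences, and add-one smoothing here are illustrative assumptions, not the paper's actual stimuli or models:

```python
import math
from collections import Counter

def bigram_surprisal(sentence, bigram_counts, unigram_counts, vocab_size):
    """Total surprisal (-log2 p) of a sentence under an add-one-smoothed bigram model."""
    tokens = ["<s>"] + sentence.split()
    total = 0.0
    for prev, word in zip(tokens, tokens[1:]):
        p = (bigram_counts[(prev, word)] + 1) / (unigram_counts[prev] + vocab_size)
        total += -math.log2(p)
    return total

# Toy training corpus in which the shifted order is attested with a heavy NP,
# so the model should assign it lower surprisal (a heavy-NP-shift preference).
corpus = [
    "she gave the letter to Mary",
    "she gave to Mary the very long and rambling letter",
    "she gave to Mary the very long and rambling letter",
]
unigrams, bigrams = Counter(), Counter()
for line in corpus:
    toks = ["<s>"] + line.split()
    unigrams.update(toks)
    bigrams.update(zip(toks, toks[1:]))
V = len(unigrams)

# Hypothetical minimal pair for heavy NP shift: same words, two orders.
heavy_shifted = "she gave to Mary the very long and rambling letter"
heavy_unshifted = "she gave the very long and rambling letter to Mary"

# A "preference" = lower total surprisal for one order.
print(bigram_surprisal(heavy_shifted, bigrams, unigrams, V)
      < bigram_surprisal(heavy_unshifted, bigrams, unigrams, V))  # prints True
```

The paper's experiments replace this toy scorer with per-word surprisals from trained RNN language models (and an n-gram baseline) over controlled stimulus sets that systematically vary NP weight, animacy, and definiteness.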


