
Colorless green recurrent networks dream hierarchically


Abstract

Recurrent neural networks (RNNs) have achieved impressive results in a variety of linguistic processing tasks, suggesting that they can induce non-trivial properties of language. We investigate here to what extent RNNs learn to track abstract hierarchical syntactic structure. We test whether RNNs trained with a generic language modeling objective in four languages (Italian, English, Hebrew, Russian) can predict long-distance number agreement in various constructions. We include in our evaluation nonsensical sentences where RNNs cannot rely on semantic or lexical cues ("The colorless green ideas I ate with the chair sleep furiously"), and, for Italian, we compare model performance to human intuitions. Our language-model-trained RNNs make reliable predictions about long-distance agreement, and do not lag much behind human performance. We thus bring support to the hypothesis that RNNs are not just shallow-pattern extractors, but also acquire deeper grammatical competence.
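To make the evaluation protocol concrete, the following is a minimal sketch of a long-distance agreement test. It assumes a PyTorch language model that returns per-token logits and a simple token-to-index vocabulary; the model interface, helper names, and example sentence are illustrative assumptions, not the authors' actual code or data. A test item counts as passed when the model assigns higher probability to the grammatical verb form than to the ungrammatical one after the same sentence prefix.

```python
import torch

def score_continuation(model, vocab, prefix_tokens, candidate):
    """Log-probability the model assigns to `candidate` right after `prefix_tokens`."""
    ids = torch.tensor([[vocab[t] for t in prefix_tokens]])
    with torch.no_grad():
        logits, _ = model(ids)  # assumed interface: model returns (logits, hidden_state)
        log_probs = torch.log_softmax(logits[0, -1], dim=-1)
    return log_probs[vocab[candidate]].item()

def agreement_correct(model, vocab, prefix_tokens, correct_verb, wrong_verb):
    """Pass the item if the grammatical verb form outscores the ungrammatical one."""
    return (score_continuation(model, vocab, prefix_tokens, correct_verb)
            > score_continuation(model, vocab, prefix_tokens, wrong_verb))

# Hypothetical usage: plural subject "ideas" with an intervening singular noun "chair".
# prefix = "the colorless green ideas i ate with the chair".split()
# agreement_correct(lm, vocab, prefix, correct_verb="sleep", wrong_verb="sleeps")
```

Accuracy on a set of such items, including the nonsensical ones, is then the fraction of items for which the grammatical form wins.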
