1st EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, 2018

Rearranging the Familiar: Testing Compositional Generalization in Recurrent Networks

Abstract

Systematic compositionality is the ability to recombine meaningful units with regular and predictable outcomes, and it's seen as key to the human capacity for generalization in language. Recent work (Lake and Baroni, 2018) has studied systematic compositionality in modern seq2seq models using generalization to novel navigation instructions in a grounded environment as a probing tool. Lake and Baroni's main experiment required the models to quickly bootstrap the meaning of new words. We extend this framework here to settings where the model needs only to re-combine well-trained functional words (such as "around" and "right") in novel contexts. Our findings confirm and strengthen the earlier ones: seq2seq models can be impressively good at generalizing to novel combinations of previously-seen input, but only when they receive extensive training on the specific pattern to be generalized (e.g., generalizing from many examples of "X around right" to "jump around right"), while failing when generalization requires novel application of compositional rules (e.g., inferring the meaning of "around right" from those of "right" and "around").
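For context, the navigation commands in question come from the SCAN-style grounded task of Lake and Baroni (2018), where each command maps deterministically to an action sequence. The sketch below is a minimal, hypothetical interpreter for just the fragment discussed in the abstract; it is not the paper's code, and the primitive names, the TURNS table, and the interpret() helper are illustrative assumptions. It shows the compositional rule, repeating a turn-and-act step four times for "around", that a model must apply to get "jump around right" correct without pattern-specific training.

```python
# A minimal sketch of SCAN-style command semantics (Lake and Baroni, 2018),
# covering only the '<verb> [around] <direction>' fragment discussed above.
# Illustrative assumption, not the paper's code: primitive names, the TURNS
# table, and the interpret() helper are hypothetical.

PRIMITIVES = {
    "jump": ["JUMP"],
    "walk": ["WALK"],
    "turn": [],  # bare "turn" contributes no action; the direction supplies it
}
TURNS = {"left": "LTURN", "right": "RTURN"}

def interpret(command: str) -> list[str]:
    """Map a '<verb> [around] <direction>' command to its action sequence."""
    words = command.split()
    verb, direction = words[0], words[-1]
    # One turn-and-act step: turn toward the direction, then do the action.
    step = [TURNS[direction]] + PRIMITIVES[verb]
    # "around" composes by repeating the step through a full 360-degree loop.
    return step * 4 if "around" in words else step

print(interpret("jump right"))         # ['RTURN', 'JUMP']
print(interpret("walk around left"))   # ['LTURN', 'WALK'] repeated 4 times
print(interpret("jump around right"))  # composes "jump", "around", "right"
```

Under these rules, training data containing "walk around right" and "jump right" already fixes the meaning of every unit in "jump around right"; the paper's finding is that seq2seq models still fail to derive such held-out combinations unless they have seen extensive examples of the specific pattern.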
