Conference on Empirical Methods in Natural Language Processing

Limitations in learning an interpreted language with recurrent models

Abstract

In this submission I report work in progress on learning simplified interpreted languages by means of recurrent models. The data is constructed to reflect core properties of natural language as modeled in formal syntax and semantics. Preliminary results suggest that LSTM networks do generalise to compositional interpretation, albeit only in the most favorable learning setting.
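As an illustration of the kind of setup the abstract describes, here is a minimal sketch, assuming PyTorch and a hypothetical toy interpreted language of nested modular-addition expressions; the paper's actual data, language, and hyperparameters are not specified here, and all names (make_expr, Interpreter, etc.) are illustrative. An LSTM is trained to map expression token sequences to their interpreted values, then probed on deeper nesting than seen in training to test compositional generalisation.

```python
# Hypothetical sketch, not the paper's actual experiment: an LSTM reads
# token sequences of a toy interpreted language (nested modular-addition
# expressions) and predicts each expression's interpreted value.
import random
import torch
import torch.nn as nn

MOD = 10
VOCAB = ["(", ")", "add"] + [str(d) for d in range(MOD)]
TOK = {t: i for i, t in enumerate(VOCAB)}

def make_expr(depth):
    """Return (token list, interpreted value) for a random expression."""
    if depth == 0:
        d = random.randrange(MOD)
        return [str(d)], d
    left, lv = make_expr(random.randrange(depth))
    right, rv = make_expr(random.randrange(depth))
    return ["(", "add"] + left + right + [")"], (lv + rv) % MOD

class Interpreter(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.emb = nn.Embedding(len(VOCAB), dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, MOD)

    def forward(self, tokens):  # tokens: (1, seq_len) LongTensor
        h, _ = self.lstm(self.emb(tokens))
        return self.out(h[:, -1])  # read value off the final hidden state

model = Interpreter()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Train on shallow expressions only.
for step in range(2000):
    toks, val = make_expr(depth=2)
    x = torch.tensor([[TOK[t] for t in toks]])
    loss = loss_fn(model(x), torch.tensor([val]))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Probe compositional generalisation: evaluate on deeper nesting.
with torch.no_grad():
    correct = 0
    for _ in range(200):
        toks, val = make_expr(depth=4)
        x = torch.tensor([[TOK[t] for t in toks]])
        correct += int(model(x).argmax(dim=-1).item() == val)
print(f"accuracy on deeper expressions: {correct / 200:.2f}")
```

The split between shallow training expressions and deeper test expressions is one common way to operationalise the "most favorable learning setting" contrast the abstract hints at; other splits (e.g., held-out sub-expressions) would probe the same question differently.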
