Representation of Word Meaning in the Intermediate Projection Layer of a Neural Language Model

1st EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, 2018

Abstract

Performance in language modelling has been significantly improved by training recurrent neural networks on large corpora. This progress has come at the cost of interpretability and an understanding of how these architectures function, making principled development of better language models more difficult. We look inside a state-of-the-art neural language model to analyse how this model represents high-level lexico-semantic information. In particular, we investigate how the model represents words by extracting activation patterns where they occur in the text, and compare these representations directly to human semantic knowledge.
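The abstract describes the general procedure only at a high level. A minimal sketch of that kind of analysis is shown below, assuming a recurrent LM from which hidden activations can be read out: collect the activation at each occurrence of a target word, average them into one vector per word type, and correlate the resulting pairwise similarities with human similarity ratings. The tiny untrained LSTM, the toy corpus, and the human_sim scores are illustrative placeholders, not the authors' model or data.

```python
# Sketch: extract per-word activation patterns from a recurrent LM and
# compare their similarity structure to (placeholder) human ratings.
import torch
import torch.nn as nn
from scipy.stats import spearmanr

# Toy corpus and vocabulary (stand-ins for a large training corpus).
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = {w: i for i, w in enumerate(sorted(set(corpus)))}
ids = torch.tensor([[vocab[w] for w in corpus]])  # shape: (1, seq_len)

# Untrained stand-in for a pretrained neural LM (embedding -> LSTM).
emb = nn.Embedding(len(vocab), 16)
lstm = nn.LSTM(16, 32, batch_first=True)

with torch.no_grad():
    hidden, _ = lstm(emb(ids))  # (1, seq_len, 32): one activation vector per token

# Average the activation vectors over all occurrences of each word type.
word_vec = {}
for w, i in vocab.items():
    mask = (ids[0] == i)
    word_vec[w] = hidden[0, mask].mean(dim=0)

def cosine(a, b):
    return torch.nn.functional.cosine_similarity(a, b, dim=0).item()

# Hypothetical human similarity ratings for a few word pairs (illustrative only).
human_sim = {("cat", "dog"): 0.8, ("cat", "mat"): 0.2, ("mat", "rug"): 0.7}

model_scores = [cosine(word_vec[a], word_vec[b]) for a, b in human_sim]
rho, _ = spearmanr(model_scores, list(human_sim.values()))
print(f"Spearman correlation with human ratings: {rho:.2f}")
```

In practice the activations would come from a trained language model run over a large corpus, and the comparison would use an established human similarity benchmark rather than the three made-up pairs above.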