Nordic Conference of Computational Linguistics

Language Modeling with Syntactic and Semantic Representation for Sentence Acceptability Predictions



Abstract

In this paper, we investigate the effect of enhancing lexical embeddings in LSTM language models (LMs) with syntactic and semantic representations. We evaluate the language models using perplexity, and we evaluate their performance on the task of predicting human sentence acceptability judgments. We train LSTM language models on sentences automatically annotated with universal syntactic dependency roles (Nivre et al., 2016), dependency tree depth features, and universal semantic tags (Abzianidze et al., 2017) to predict sentence acceptability judgments. Our experiments indicate that syntactic depth and tags lower the perplexity compared to a plain LSTM language model, while semantic tags increase the perplexity. Our experiments also show that neither syntactic nor semantic tags improve the performance of LSTM language models on the task of predicting sentence acceptability judgments.
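The abstract describes enhancing the lexical embeddings of an LSTM language model with syntactic and semantic tag representations and evaluating the model with perplexity. Below is a minimal sketch of one plausible reading of that setup, assuming each token's tag embedding (dependency role, tree depth bucket, or semantic tag) is concatenated with its word embedding before the LSTM; the class name, dimensions, and concatenation scheme are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: an LSTM LM whose input embeddings are augmented with a per-token
# tag embedding by concatenation. All sizes are hypothetical defaults.
import torch
import torch.nn as nn

class TagAugmentedLSTMLM(nn.Module):
    def __init__(self, vocab_size, tag_vocab_size,
                 word_dim=300, tag_dim=50, hidden_dim=650):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        # One tag per token: e.g. a UD dependency role, a tree-depth bucket,
        # or a universal semantic tag.
        self.tag_emb = nn.Embedding(tag_vocab_size, tag_dim)
        self.lstm = nn.LSTM(word_dim + tag_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, words, tags, hidden=None):
        # Enhance the lexical embedding by concatenating the tag embedding.
        x = torch.cat([self.word_emb(words), self.tag_emb(tags)], dim=-1)
        h, hidden = self.lstm(x, hidden)
        return self.out(h), hidden

def perplexity(model, words, tags, targets):
    # Perplexity = exp of the mean token-level cross-entropy of the LM.
    logits, _ = model(words, tags)
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    return torch.exp(loss).item()
```

In this reading, the tag sequence is produced by automatic annotation of the training sentences, and the same perplexity computation is what the paper uses to compare the augmented models against a plain LSTM baseline.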
