Published in: Nordic Conference on Computational Linguistics

Language Modeling with Syntactic and Semantic Representation for Sentence Acceptability Predictions



Abstract

In this paper, we investigate the effect of enhancing lexical embeddings in LSTM language models (LMs) with syntactic and semantic representations. We evaluate the language models intrinsically using perplexity, and extrinsically on the task of predicting human sentence acceptability judgments. We train LSTM language models on sentences automatically annotated with universal syntactic dependency roles (Nivre et al., 2016), dependency tree depth features, and universal semantic tags (Abzianidze et al., 2017) to predict sentence acceptability judgments. Our experiments indicate that syntactic depth features and dependency tags lower the perplexity compared to a plain LSTM language model, while semantic tags increase it. Our experiments also show that neither syntactic nor semantic tags improve the performance of LSTM language models on the task of predicting sentence acceptability judgments.
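The abstract's intrinsic evaluation metric, perplexity, is the exponentiated average negative log-probability a language model assigns to the tokens of a corpus. As a minimal illustration (not the paper's code), a helper computing perplexity from per-token natural-log probabilities might look like:

```python
import math

def perplexity(log_probs):
    """Perplexity of a token sequence, given the natural-log
    probability the language model assigned to each token.
    PPL = exp(-(1/N) * sum(log p_i)); lower is better."""
    if not log_probs:
        raise ValueError("need at least one token log-probability")
    return math.exp(-sum(log_probs) / len(log_probs))

# A model that assigns uniform probability 1/4 to each of 4 tokens
# has perplexity exactly 4 (it is "as confused as" a 4-way choice).
ppl = perplexity([math.log(0.25)] * 4)
print(ppl)  # → 4.0 (up to floating-point rounding)
```

In this framing, the paper's finding is that adding dependency-depth and syntactic-tag inputs lowers this quantity relative to a plain LSTM LM, while semantic tags raise it.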

