2nd Workshop on Evaluating Vector-Space Representations for NLP (RepEval 2017)

Recurrent Neural Network-Based Sentence Encoder with Gated Attention for Natural Language Inference

Abstract

The RepEval 2017 Shared Task aims to evaluate natural language understanding models for sentence representation, in which a sentence is represented as a fixed-length vector with neural networks and the quality of the representation is tested with a natural language inference task. This paper describes our system (alpha), which is ranked among the top systems in the Shared Task, both on the in-domain test set (obtaining 74.9% accuracy) and on the cross-domain test set (also attaining 74.9% accuracy), demonstrating that the model generalizes well to cross-domain data. Our model is equipped with intra-sentence gated-attention composition, which helps it achieve better performance. In addition to submitting our model to the Shared Task, we have also tested it on the Stanford Natural Language Inference (SNLI) dataset. We obtain an accuracy of 85.5%, which is the best reported result on SNLI when cross-sentence attention is not allowed, the same condition enforced in RepEval 2017.
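
The core component named in the abstract is intra-sentence gated-attention composition: a BiLSTM reads the sentence, and its hidden states are pooled into a single fixed-length vector using gate-derived attention weights, with no cross-sentence attention between premise and hypothesis. The sketch below is a minimal illustration in PyTorch, not the authors' code: all class and parameter names are made up, and a learned scalar gate stands in for the paper's weights derived from the LSTM's own gates.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GatedAttentionEncoder(nn.Module):
        # Encodes a sentence into one fixed-length vector, as required
        # by RepEval 2017 (no cross-sentence attention allowed).
        def __init__(self, vocab_size, emb_dim=300, hidden_dim=300):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.bilstm = nn.LSTM(emb_dim, hidden_dim,
                                  batch_first=True, bidirectional=True)
            # Simplified stand-in gate: one scalar score per position
            # (the paper derives weights from the LSTM's internal gates).
            self.gate = nn.Linear(2 * hidden_dim, 1)

        def forward(self, token_ids):                    # (batch, seq_len)
            h, _ = self.bilstm(self.embed(token_ids))    # (batch, seq_len, 2H)
            weights = F.softmax(self.gate(h).squeeze(-1), dim=1)
            # Gated-attention composition: weighted sum of hidden states.
            return torch.bmm(weights.unsqueeze(1), h).squeeze(1)  # (batch, 2H)

    encoder = GatedAttentionEncoder(vocab_size=10000)
    tokens = torch.randint(0, 10000, (4, 12))    # toy batch of 4 sentences
    print(encoder(tokens).shape)                 # torch.Size([4, 600])

Under the Shared Task's rules, the premise and hypothesis are each encoded independently by such an encoder, and the two fixed-length vectors are combined only afterwards (e.g. by concatenation and element-wise operations) before the inference classifier.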