Expert Systems with Applications

Word n-gram attention models for sentence similarity and inference


Abstract

Semantic Textual Similarity and Natural Language Inference are two popular natural language understanding tasks used to benchmark sentence representation models where two sentences are paired. In such tasks, sentences are represented as bags of words, sequences, trees, or convolutions, but the attention model is based on word pairs. In this article we introduce the use of word n-grams in the attention model. Our results on five datasets show an error reduction of up to 41% with respect to the word-based attention model. The improvements are especially relevant in low-data regimes and, in the case of natural language inference, on the recently released hard subset of Natural Language Inference datasets. (C) 2019 Elsevier Ltd. All rights reserved.
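The abstract describes attention computed between word n-grams rather than individual word pairs. As a rough illustration only, not the authors' actual architecture, the sketch below forms n-gram representations by averaging word embeddings over sliding windows and then computes soft attention weights between the n-grams of two sentences; the function names, the averaging pooling, and the dot-product scoring are all assumptions made for the example.

```python
import numpy as np

def ngram_reps(word_vecs, n):
    """Average word embeddings over sliding windows of size n.

    word_vecs: array of shape (seq_len, dim).
    Returns an array of shape (seq_len - n + 1, dim).
    """
    L, _ = word_vecs.shape
    return np.stack([word_vecs[i:i + n].mean(axis=0) for i in range(L - n + 1)])

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def ngram_attention(a_vecs, b_vecs, n=2):
    """Attention weights between the n-grams of two sentences.

    Each row gives how one n-gram of sentence A distributes its
    attention over the n-grams of sentence B.
    """
    A = ngram_reps(a_vecs, n)          # (La - n + 1, dim)
    B = ngram_reps(b_vecs, n)          # (Lb - n + 1, dim)
    scores = A @ B.T                   # dot-product similarity (assumed scoring)
    return softmax(scores, axis=1)
```

For example, with a 4-word and a 5-word sentence and bigrams (n=2), the resulting attention matrix has shape (3, 4) and each row sums to 1; a word-pair attention model corresponds to the special case n=1.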
