Venue: Annual Meeting of the Association for Computational Linguistics

End-Task Oriented Textual Entailment via Deep Explorations of Inter-Sentence Interactions



Abstract

This work deals with SciTail, a natural entailment challenge derived from a multiple-choice question answering problem. The premises and hypotheses in SciTail were generated with no awareness of each other and were not specifically aimed at the entailment task. This makes it more challenging than other entailment datasets and more directly useful to the end task, question answering. We propose DeIsTe (deep explorations of inter-sentence interactions for textual entailment) for this entailment task. Given word-to-word interactions between the premise-hypothesis pair (P, H), DeIsTe consists of: (i) a parameter-dynamic convolution that lets important words in P and H play a dominant role in the learnt representations; and (ii) a position-aware attentive convolution that encodes the representation and position information of the aligned word pairs. Experiments show that DeIsTe achieves a ≈5% improvement over the prior state of the art, and that DeIsTe pretrained on SciTail generalizes well to RTE-5.
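To make the first component concrete, below is a minimal numpy sketch of the parameter-dynamic convolution idea: word-to-word interaction scores between P and H are turned into per-word gates, and each convolution window's filter response is scaled by the gate of its centre word, so strongly interacting ("important") premise words dominate the learnt representation. All names, dimensions, and the gating scheme here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def interaction_matrix(P, H):
    # Word-to-word interactions via cosine similarity (assumed choice).
    Pn = P / np.linalg.norm(P, axis=1, keepdims=True)
    Hn = H / np.linalg.norm(H, axis=1, keepdims=True)
    return Pn @ Hn.T                               # shape: (len(P), len(H))

def dynamic_conv(P, gates, W, b, k=3):
    # 1-D convolution over premise words where each window's filter
    # response is scaled by the gate of its centre word.
    n, d = P.shape
    pad = k // 2
    Pp = np.pad(P, ((pad, pad), (0, 0)))           # zero-pad at both ends
    out = np.zeros((n, W.shape[0]))
    for i in range(n):
        window = Pp[i:i + k].reshape(-1)           # flattened k-word window
        out[i] = gates[i] * (W @ window + b)       # gate scales the response
    return np.tanh(out)

rng = np.random.default_rng(0)
P = rng.normal(size=(5, 8))                        # premise: 5 words, 8-dim
H = rng.normal(size=(4, 8))                        # hypothesis: 4 words
A = interaction_matrix(P, H)
gates = 1 / (1 + np.exp(-A.max(axis=1)))           # importance per premise word
W = rng.normal(size=(6, 3 * 8)) * 0.1              # 6 filters, window of 3 words
b = np.zeros(6)
rep = dynamic_conv(P, gates, W, b)
print(rep.shape)                                   # (5, 6): one vector per word
```

The second component, the position-aware attentive convolution, would additionally feed the aligned hypothesis word and its position offset into each window; that extension is omitted here for brevity.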


