Workshop on Commonsense Inference in Natural Language Processing

How Pre-trained Word Representations Capture Commonsense Physical Comparisons



Abstract

Understanding common sense is important for effective natural language reasoning. One type of common sense is how two objects compare on physical properties such as size and weight: e.g., 'is a house bigger than a person?'. We probe whether pre-trained representations capture such comparisons and find that they in fact achieve higher accuracy than previous approaches. They also generalize to comparisons involving objects not seen during training. We investigate how such comparisons are made: models learn a consistent ordering over all the objects in the comparisons. Probing models also attain significantly higher accuracy than baseline models that exploit dataset artifacts, e.g., by memorizing that some words are larger than any other word.
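
The page carries only the abstract, but the probing setup it describes lends itself to a short illustration. The sketch below is not the authors' implementation: it assumes GloVe-style static embeddings (the file name glove.6B.300d.txt is a placeholder), a logistic-regression probe, and toy object pairs invented for illustration.

```python
# Minimal sketch of a probing classifier for physical comparisons.
# Assumptions (not from the paper): GloVe-format static embeddings,
# a logistic-regression probe, and invented (object, object, label) pairs.
import numpy as np
from sklearn.linear_model import LogisticRegression

def load_embeddings(path):
    """Load word vectors from a GloVe-format text file: word v1 v2 ..."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.array(parts[1:], dtype=np.float32)
    return vectors

# Hypothetical path and toy comparison data (label 1 = first object is bigger).
emb = load_embeddings("glove.6B.300d.txt")
pairs = [("house", "person", 1), ("ant", "car", 0),
         ("mountain", "chair", 1), ("coin", "truck", 0)]

# Each pair is represented by concatenating the two frozen object embeddings;
# only the lightweight probe is trained, never the embeddings themselves.
X = np.stack([np.concatenate([emb[a], emb[b]]) for a, b, _ in pairs])
y = np.array([label for _, _, label in pairs])

probe = LogisticRegression(max_iter=1000).fit(X, y)
print(probe.predict(X))  # sanity check on the toy pairs; a real probe is scored on held-out pairs

# One way to visualize the "consistent ordering" finding: if the relation is
# antisymmetric, the probe's two weight halves point in opposite directions,
# so their difference acts as a single size axis that ranks every object.
d = emb["house"].shape[0]
size_axis = probe.coef_[0][:d] - probe.coef_[0][d:]
scores = {w: float(size_axis @ emb[w]) for w in ["ant", "coin", "person", "car"]}
print(sorted(scores, key=scores.get))  # objects sorted smallest to largest
```

Because the embeddings stay frozen and only the probe is fit, the probe's accuracy can be read as evidence about what the pre-trained vectors themselves encode, which is the logic behind the abstract's claims.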
