
BERTese: Learning to Speak to BERT

Abstract

Large pre-trained language models have been shown to encode large amounts of world and commonsense knowledge in their parameters, leading to substantial interest in methods for extracting that knowledge. In past work, knowledge was extracted by taking manually-authored queries and gathering paraphrases for them using a separate pipeline. In this work, we propose a method for automatically rewriting queries into "BERTese", a paraphrase query that is directly optimized towards better knowledge extraction. To encourage meaningful rewrites, we add auxiliary loss functions that encourage the query to correspond to actual language tokens. We empirically show our approach outperforms competing baselines, obviating the need for complex pipelines. Moreover, BERTese provides some insight into the type of language that helps language models perform knowledge extraction.
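The abstract names the key ingredient, an auxiliary loss that "encourages the query to correspond to actual language tokens", but does not give its formulation. One plausible reading is a nearest-neighbor penalty in embedding space: each continuous vector the rewriter emits is pulled toward the closest row of the language model's token embedding table, so the optimized query stays decodable into real tokens (e.g., a LAMA-style cloze query such as "Dante was born in [MASK]."). The PyTorch sketch below illustrates that reading only; token_closeness_loss, its squared-distance choice, and the toy shapes are assumptions, not the paper's code.

```python
import torch

def token_closeness_loss(query_vecs: torch.Tensor,
                         embedding_matrix: torch.Tensor) -> torch.Tensor:
    """Penalize rewriter outputs that drift away from real token embeddings.

    query_vecs:       (seq_len, hidden) continuous query vectors from the rewriter
    embedding_matrix: (vocab_size, hidden) the language model's input embedding table

    Hypothetical reconstruction: the paper only states that auxiliary losses
    encourage the query to correspond to actual language tokens; the exact
    distance and reduction used here are illustrative assumptions.
    """
    # Euclidean distance from every query position to every vocabulary
    # embedding, squared: shape (seq_len, vocab_size).
    dists = torch.cdist(query_vecs, embedding_matrix).pow(2)
    # For each position, keep only the distance to its nearest real token;
    # minimizing the mean drives every position onto the token manifold.
    return dists.min(dim=-1).values.mean()

# Toy usage with random tensors standing in for a real rewriter and BERT's table.
if __name__ == "__main__":
    torch.manual_seed(0)
    vocab = torch.randn(30522, 768)   # BERT-base vocabulary size and hidden dim
    query = torch.randn(12, 768)      # a 12-token rewritten query
    print(token_closeness_loss(query, vocab))
```

In training, a term like this would be added to the main knowledge-extraction objective, trading off faithfulness to real tokens against extraction accuracy.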
