Automatic Fake News Detection: Are Models Learning to Reason?

Abstract

Most fact-checking models for automatic fake news detection are based on reasoning: given a claim with associated evidence, the model aims to estimate the claim's veracity from the supporting or refuting content within the evidence. When these models perform well, it is generally assumed that they have learned to reason over the evidence with regard to the claim. In this paper, we investigate this assumption by exploring the relationship and relative importance of the claim and the evidence. Surprisingly, on political fact-checking datasets we find that the highest effectiveness is most often obtained by using the evidence alone: including the claim has an impact that is either negligible or harmful to effectiveness. This highlights an important problem in what constitutes evidence in existing approaches to automatic fake news detection.
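The input-ablation protocol the abstract describes can be sketched as follows: train and evaluate the same model three times, varying only which inputs (claim, evidence, or both) it is allowed to see. The toy dataset and the keyword-count "classifier" below are hypothetical stand-ins for illustration only, not the paper's actual models or data; note how, in this contrived example, the evidence-only variant already matches the claim-plus-evidence variant, mirroring the paper's finding.

```python
from collections import Counter

# Toy (claim, evidence, label) triples; label 1 = true claim, 0 = false claim.
# These examples are invented for illustration.
DATA = [
    ("vaccines cause autism",
     "large studies refute any link between vaccines and autism", 0),
    ("the earth orbits the sun",
     "centuries of observation support heliocentrism", 1),
    ("smoking is harmless",
     "medical consensus refutes claims that smoking is harmless", 0),
    ("water boils at 100C at sea level",
     "physics textbooks support this boiling point", 1),
]

def featurize(claim, evidence, mode):
    # The ablation: restrict which inputs the model sees.
    if mode == "claim-only":
        text = claim
    elif mode == "evidence-only":
        text = evidence
    else:  # "claim+evidence"
        text = claim + " " + evidence
    return Counter(text.split())

def predict(features):
    # Trivial stand-in classifier: wording like "refute(s)" in the input
    # signals a false claim, "support(s)" a true one.
    score = (features["support"] + features["supports"]
             - features["refute"] - features["refutes"])
    return 1 if score > 0 else 0

def accuracy(mode):
    correct = sum(predict(featurize(c, e, mode)) == y for c, e, y in DATA)
    return correct / len(DATA)

for mode in ("claim-only", "evidence-only", "claim+evidence"):
    print(f"{mode}: {accuracy(mode):.2f}")
```

On this toy data the evidence-only and claim+evidence runs both score 1.00 while claim-only scores 0.50: the predictive signal lives entirely in the evidence wording, which is exactly the kind of effect the ablation is designed to expose.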
