
All That's 'Human' Is Not Gold: Evaluating Human Evaluation of Generated Text



Abstract

Human evaluations are typically considered the gold standard in natural language generation, but as models' fluency improves, how well can evaluators detect and judge machine-generated text? We run a study assessing nonexperts' ability to distinguish between human- and machine-authored text (GPT2 and GPT3) in three domains (stories, news articles, and recipes). We find that, without training, evaluators distinguished between GPT3- and human-authored text at a level no better than random chance. We explore three approaches for quickly training evaluators to better identify GPT3-authored text (detailed instructions, annotated examples, and paired examples) and find that, while training improved evaluators' accuracy to as much as 55%, the gains were not statistically significant across the three domains. Given the inconsistent results across text domains and the often contradictory reasons evaluators gave for their judgments, we examine the role untrained human evaluations play in NLG evaluation and provide recommendations to NLG researchers for improving human evaluations of text generated from state-of-the-art models.
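The abstract's claims about "random chance" and "55% accuracy" rest on comparing evaluator accuracy against the 50% baseline of the binary human-vs-machine judgment. A minimal sketch of that comparison is below, using a two-sided binomial test; the judgment counts are hypothetical placeholders, not the paper's data, and this only tests deviation from chance, not the cross-domain significance of training gains discussed in the abstract.

```python
# Sketch: does evaluator accuracy differ from the 50% chance level
# of a binary human-vs-machine judgment? Counts are hypothetical.
from scipy.stats import binomtest

n_judgments = 500   # hypothetical total judgments collected
n_correct = 275     # hypothetical correct judgments (55% accuracy)

result = binomtest(n_correct, n_judgments, p=0.5, alternative="two-sided")
accuracy = n_correct / n_judgments
print(f"accuracy = {accuracy:.1%}, p-value vs. chance = {result.pvalue:.3f}")
```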
