Conference on Empirical Methods in Natural Language Processing

Further Investigation into Reference Bias in Monolingual Evaluation of Machine Translation



Abstract

Monolingual evaluation of Machine Translation (MT) aims to simplify human assessment by requiring assessors to compare the meaning of the MT output with a reference translation, opening up the task to a much larger pool of genuinely qualified evaluators. Monolingual evaluation runs the risk, however, of bias in favour of MT systems that happen to produce translations superficially similar to the reference, and, consistent with this intuition, previous investigations have concluded that monolingual assessment is strongly biased in this respect. On re-examination of past analyses, however, we identify a series of potential analytical errors that raise important questions about the reliability of those conclusions. We subsequently carry out a further investigation into reference bias through direct human assessment of MT adequacy using quality-controlled crowd-sourcing. Contrary to both intuition and past conclusions, the results show no significant evidence of reference bias in monolingual evaluation of MT.
