
The Back-translation Score: Automatic MT Evaluation at the Sentence Level without Reference Translations



Abstract

Automatic tools for machine translation (MT) evaluation such as BLEU are well established, but have the drawbacks that they do not perform well at the sentence level and that they presuppose manually translated reference texts. Assuming that the MT system to be evaluated can handle both directions of a language pair, in this research we suggest conducting automatic MT evaluation by determining the orthographic similarity between a back-translation and the original source text. This way we eliminate the need for human-translated reference texts. By correlating BLEU and back-translation scores with human judgments, we show that the back-translation score gives improved performance at the sentence level.
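As a rough illustration of the idea (not the authors' exact implementation), the sketch below scores a sentence by round-tripping it through both directions of an MT system and measuring character-level similarity between the source and its back-translation. The choice of difflib's SequenceMatcher ratio as the orthographic similarity measure, and the `translate`/`back_translate` callables, are assumptions made for illustration only.

```python
from difflib import SequenceMatcher
from typing import Callable


def orthographic_similarity(source: str, back_translation: str) -> float:
    """Character-level similarity between two strings, in [0, 1].

    SequenceMatcher.ratio() is used here as a stand-in for the paper's
    orthographic similarity measure.
    """
    return SequenceMatcher(None, source, back_translation).ratio()


def back_translation_score(
    source: str,
    translate: Callable[[str], str],        # hypothetical: source -> target direction of the MT system
    back_translate: Callable[[str], str],   # hypothetical: target -> source direction of the same system
) -> float:
    """Score an MT output without a reference translation.

    The source sentence is translated into the target language and then
    translated back; the score is the orthographic similarity between
    the original source and the back-translation.
    """
    hypothesis = translate(source)
    round_trip = back_translate(hypothesis)
    return orthographic_similarity(source, round_trip)
```

A higher score indicates that the round trip preserved more of the original wording, which the paper correlates with human judgments of translation quality at the sentence level.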
