Meta-evaluation of Machine Translation Using Parallel Legal Texts

Abstract

In this paper we report our recent work on evaluating a number of popular automatic evaluation metrics for machine translation using parallel legal texts. The evaluation is carried out following a recognized evaluation protocol to assess the reliability of these metrics, and their strengths and weaknesses, in terms of their correlation with human judgments of translation quality. The results confirm the reliability of the well-known metrics BLEU and NIST for English-to-Chinese translation, and show that our metric, ATEC, outperforms all others for Chinese-to-English translation. We also demonstrate the remarkable impact of the choice of evaluation metric on the ranking of online machine translation systems for legal translation.
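The meta-evaluation described above rests on correlating automatic metric scores with human judgments. As a minimal sketch of that idea (the abstract does not specify the correlation statistic or the data, so the Pearson coefficient and all scores below are illustrative assumptions, not the paper's actual protocol or results):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores for five MT systems on a legal test set:
# an automatic metric's scores vs. mean human adequacy judgments.
metric_scores = [0.31, 0.45, 0.52, 0.28, 0.60]
human_scores  = [3.1, 3.8, 4.0, 2.9, 4.4]

r = pearson(metric_scores, human_scores)
print(f"correlation with human judgment: {r:.3f}")
```

A metric whose scores correlate more strongly with the human judgments is considered more reliable; comparing such coefficients across metrics (BLEU, NIST, ATEC, ...) is what yields the rankings the abstract reports.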
