Workshop on Predicting and Improving Text Readability for Target Reader Populations 2013

The C-Score - Proposing a Reading Comprehension Metrics as a Common Evaluation Measure for Text Simplification

Abstract

This article addresses the lack of common approaches for text simplification evaluation by presenting a first attempt at a common evaluation metric. The article proposes reading comprehension evaluation as a method for evaluating the results of Text Simplification (TS). An experiment illustrating the evaluation method is presented, together with three formulae for quantifying reading comprehension. The formulae produce a single score, the C-score, which estimates a user's reading comprehension of a given text. The score can be used to evaluate the performance of a text simplification engine on pairs of complex and simplified texts, or to compare the performance of different TS methods on the same texts. The approach can be particularly useful for modern crowd-sourcing platforms such as Amazon's Mechanical Turk or CrowdFlower. The aim of this paper is thus to propose an evaluation approach and to motivate the TS community to start a discussion aimed at arriving at a common evaluation metric for this task.
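The paper's three C-score formulae are not reproduced in this abstract, so the sketch below is only a rough illustration of the proposed workflow: crowd-sourced reading comprehension questions are scored per participant and per text, and the resulting scores are compared between a complex text and its simplified version. The scoring function, question set, and participant answers are all hypothetical placeholders, not the paper's actual formulae.

```python
# Illustrative sketch only: a stand-in comprehension score (mean fraction of
# multiple-choice questions answered correctly, averaged over participants),
# NOT the paper's C-score formulae, which are not given in this abstract.
from statistics import mean

def comprehension_score(answers, gold):
    """Average per-participant accuracy on the comprehension questions."""
    per_participant = [
        sum(a == g for a, g in zip(p, gold)) / len(gold)
        for p in answers
    ]
    return mean(per_participant)

# Hypothetical crowd-sourced data (e.g., collected via Mechanical Turk):
# each inner list is one participant's answers to the same question set.
gold = ["B", "A", "D", "C"]
answers_complex = [["B", "C", "D", "C"], ["A", "A", "D", "B"]]
answers_simplified = [["B", "A", "D", "C"], ["B", "A", "D", "B"]]

print(f"complex:    {comprehension_score(answers_complex, gold):.2f}")
print(f"simplified: {comprehension_score(answers_simplified, gold):.2f}")
# A higher score on the simplified text would suggest the TS engine
# improved readers' comprehension; the same texts and questions could
# be reused to compare different TS methods.
```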