2012 IEEE Workshop on Spoken Language Technology

Ecological validity and the evaluation of speech summarization quality

Abstract

There is little evidence of widespread adoption of speech summarization systems. This may be due in part to the fact that the natural language heuristics used to generate summaries are often optimized with respect to a class of evaluation measures that, while computationally and experimentally inexpensive, rely on subjectively selected gold standards against which automatically generated summaries are scored. This evaluation protocol does not take into account the usefulness of a summary in assisting the listener in achieving his or her goal. In this paper we study how current measures and methods for evaluating summarization systems compare to human-centric evaluation criteria. For this, we have designed and conducted an ecologically valid evaluation that determines the value of a summary when embedded in a task, rather than how closely a summary resembles a gold standard. The results of our evaluation demonstrate that in the domain of lecture summarization, the well-known baseline of maximal marginal relevance [1] is statistically significantly worse than human-generated extractive summaries, and even worse than having no summary at all in a simple quiz-taking task. Priming seems to have no statistically significant effect on the usefulness of the human summaries. This is interesting because priming had been proposed as a technique for increasing kappa scores and/or maintaining goal orientation among summary authors. In addition, our results suggest that ROUGE scores, regardless of whether they are derived from numerically-ranked reference data or ecologically valid human-extracted summaries, may not always be reliable as inexpensive proxies for task-embedded evaluations. In fact, under some conditions, relying exclusively on ROUGE may lead to scoring human-generated summaries very favourably even when a task-embedded score calls their usefulness into question relative to using no summaries at all.
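For readers unfamiliar with the intrinsic measure under discussion, the sketch below shows the kind of unigram-overlap scoring that ROUGE-1 performs against a subjectively selected gold-standard extract. It is a minimal illustration only, not the paper's evaluation pipeline: the function name rouge_1 and the toy lecture sentences are hypothetical, and published evaluations use the official ROUGE toolkit with stemming, stopword handling, and multiple reference summaries.

    from collections import Counter

    def rouge_1(candidate_tokens, reference_tokens):
        """Unigram-overlap ROUGE-1 recall, precision, and F1.

        Minimal sketch of the intrinsic measure discussed above;
        real evaluations use the official ROUGE toolkit.
        """
        cand, ref = Counter(candidate_tokens), Counter(reference_tokens)
        overlap = sum((cand & ref).values())        # clipped unigram matches
        recall = overlap / max(sum(ref.values()), 1)
        precision = overlap / max(sum(cand.values()), 1)
        f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
        return {"recall": recall, "precision": precision, "f1": f1}

    # Hypothetical toy example: an automatic extract scored against one
    # human gold-standard extract from the same lecture transcript.
    auto = "the exam covers chapters three and four".split()
    gold = "the final exam will cover chapters three and four".split()
    print(rouge_1(auto, gold))

The paper's point is precisely that a high score on this kind of inexpensive overlap measure need not translate into usefulness when the summary is embedded in a task such as quiz-taking.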
