We address the issue of human subjectivity when authoring summaries, aiming at a simple, robust evaluation of machine-generated summaries. By applying a cross-comprehension test to human-authored short summaries of broadcast news, we gauge the level of subjectivity among four authors. The instruction set is simple, so there is ample room for subjectivity. However, the approach is robust because the test does not use absolute scores, relying instead on relative comparison, which effectively alleviates the subjectivity. Finally, we illustrate the application of this scheme to evaluating the informativeness of machine-generated summaries.