This paper describes a crowdsourcing experiment on the annotation of plot-like structures in English news articles. The CrowdTruth methodology and metrics have been applied to select valid annotations from the crowd. We further present an in-depth analysis of the annotated data by comparing it with available expert data. Our results demonstrate the value of crowdsourced annotations for such complex semantic tasks, and motivate a new annotation approach that combines crowd and expert input.