
Reducing Annotation Efforts in Supervised Short Answer Scoring

Abstract

Automated short answer scoring is increasingly used to give students timely feedback about their learning progress. Building scoring models comes with high costs, as state-of-the-art methods using supervised learning require large amounts of hand-annotated data. We analyze the potential of recently proposed methods for semi-supervised learning based on clustering. We find that all examined methods (centroids, all clusters, selected pure clusters) are mainly effective for very short answers and do not generalize well to several-sentence responses.
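
The abstract names three clustering-based strategies (centroids, all clusters, selected pure clusters). As a rough illustration of the centroid variant only, the sketch below (hypothetical and scikit-learn-based, not the paper's code; the example answers, cluster count, and annotate() oracle are placeholders) clusters TF-IDF vectors of student answers, asks an annotator to score only the answer nearest each centroid, and propagates that score to the rest of the cluster.

# Minimal sketch (not the authors' implementation): clustering-based label
# propagation for short answer scoring. The answers, the cluster count, and
# the annotate() oracle are hypothetical placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import pairwise_distances_argmin_min

answers = [
    "photosynthesis converts light into chemical energy",
    "plants turn sunlight into energy",
    "the mitochondria is the powerhouse of the cell",
    "cells get their energy from mitochondria",
]

def annotate(text):
    # Stand-in for a human annotator scoring a single answer (hypothetical).
    return 1 if "energy" in text else 0

# Represent answers as TF-IDF vectors and cluster them.
vectors = TfidfVectorizer().fit_transform(answers)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)

# Annotate only the answer closest to each cluster centroid ...
closest, _ = pairwise_distances_argmin_min(kmeans.cluster_centers_, vectors)
centroid_labels = {c: annotate(answers[idx]) for c, idx in enumerate(closest)}

# ... and propagate that label to every member of the cluster.
propagated = [centroid_labels[c] for c in kmeans.labels_]
print(propagated)

Under this scheme the number of manual annotations equals the number of clusters rather than the number of answers, which is where the annotation savings would come from; the paper's finding is that such savings hold mainly for very short answers.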
