International Conference on Artificial Intelligence in Education

Introducing a Framework to Assess Newly Created Questions with Natural Language Processing

Abstract

Statistical models such as those derived from Item Response Theory (IRT) enable the assessment of students on a specific subject, which can be useful for several purposes (e.g., learning-path customization, drop-out prediction). However, the questions themselves have to be assessed as well and, although IRT can estimate the characteristics of questions that have already been answered by several students, this technique cannot be applied to newly generated questions. In this paper, we propose a framework for training and evaluating models that estimate the difficulty and discrimination of newly created multiple-choice questions by extracting meaningful features from the text of the question and of the possible choices. We implement one model using this framework and test it on a real-world dataset provided by CloudAcademy, showing that it outperforms previously proposed models, reducing the RMSE by 6.7% for difficulty estimation and by 10.8% for discrimination estimation. We also present the results of an ablation study performed to support our choice of features and to show how different characteristics of the questions' text affect difficulty and discrimination.
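
For context, "difficulty" and "discrimination" here refer to the parameters of the two-parameter logistic (2PL) IRT model; in its standard formulation (not necessarily the exact variant used in the paper), the probability that a student with ability \theta answers item i correctly is

P(X_i = 1 \mid \theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}

where a_i is the discrimination and b_i the difficulty of item i.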
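As an illustration only, the sketch below shows the kind of pipeline the abstract describes: features are extracted from the stem and answer options of each question and regressed onto IRT parameters estimated from previously answered questions, with RMSE as the evaluation metric. The TF-IDF features, the GradientBoostingRegressor, and the helper names are assumptions for this sketch, not the authors' actual model.

from sklearn.ensemble import GradientBoostingRegressor
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split


def question_text(q):
    # Concatenate the stem with all answer options so that both the
    # question text and the possible choices contribute features.
    return q["stem"] + " " + " ".join(q["options"])


def fit_and_evaluate(questions, targets):
    # questions: list of {"stem": str, "options": [str, ...]}
    # targets:   IRT parameter values (e.g., difficulty) estimated from
    #            response data for questions that have already been answered.
    texts = [question_text(q) for q in questions]
    vectorizer = TfidfVectorizer(max_features=2000, ngram_range=(1, 2))
    X = vectorizer.fit_transform(texts).toarray()

    X_tr, X_te, y_tr, y_te = train_test_split(X, targets, test_size=0.2, random_state=0)
    model = GradientBoostingRegressor(random_state=0)
    model.fit(X_tr, y_tr)

    # Report RMSE on held-out questions, the metric used in the abstract.
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    return model, rmse

In practice one such regressor would be trained per target, one for difficulty and one for discrimination, since the abstract reports separate RMSE reductions for the two parameters.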
