Published in: Heliyon

Measuring students' proficiency in MOOCs: multiple attempts extensions for the Rasch model



Abstract

The popularity of online courses with open access and unlimited student participation, the so-called massive open online courses (MOOCs), has been growing rapidly. Students, professors, and universities have an interest in accurate measures of students' proficiency in MOOCs. However, these measurements face several challenges: (a) assessments are dynamic: items can be added, removed, or replaced by a course author at any time; (b) students may be allowed to make several attempts within one assessment; (c) assessments may include too few items for accurate individual-level conclusions. Therefore, the common psychometric models and techniques of Classical Test Theory (CTT) and Item Response Theory (IRT) are not well suited to measuring proficiency in this setting. In this study we try to close this gap and propose cross-classification multilevel logistic extensions of a common IRT model, the Rasch model, aimed at improving the assessment of a student's proficiency by modeling the effect of attempts and by incorporating non-assessment data such as the student's interaction with video lectures and practical tasks. We illustrate these extensions on logged data from one MOOC and check their quality using a cross-validation procedure on three MOOCs. We found that (a) how performance changes over attempts depends on the student: for some students performance improves, whereas for others it may deteriorate; (b) similarly, the change over attempts varies across items; (c) a student's activity with video lectures and practical tasks is a significant predictor of response correctness, in the sense that higher activity leads to higher chances of a correct response; (d) the overall accuracy of predicting students' item responses using the extensions is 6% higher than that of the traditional Rasch model.
In sum, our results show that the approach improves assessment procedures in MOOCs and could serve as an additional source for accurate conclusions about a student's proficiency.
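As a rough illustration (not the authors' implementation), the standard Rasch model gives the probability of a correct response as a logistic function of the difference between student proficiency θ and item difficulty β; a multiple-attempts extension of the kind the abstract describes additionally shifts that predictor by student- and item-specific attempt effects. The function and parameter names below (`rasch_p`, `delta_student`, `delta_item`) are hypothetical and chosen only for this sketch:

```python
import math

def rasch_p(theta: float, beta: float) -> float:
    """Rasch model: P(correct) = logistic(theta - beta),
    where theta is student proficiency and beta is item difficulty."""
    return 1.0 / (1.0 + math.exp(-(theta - beta)))

def rasch_attempts_p(theta: float, beta: float, attempt: int,
                     delta_student: float = 0.0,
                     delta_item: float = 0.0) -> float:
    """Hypothetical multiple-attempts extension: the linear predictor
    is shifted by student- and item-specific per-attempt slopes
    (the first attempt gets no shift). In a full cross-classification
    multilevel model these slopes would be random effects estimated
    from the data rather than fixed inputs."""
    shift = (attempt - 1) * (delta_student + delta_item)
    return 1.0 / (1.0 + math.exp(-(theta - beta + shift)))

# A student of average proficiency on an average item: P = 0.5 on
# the first attempt; a positive attempt slope raises P on retries,
# a negative one lowers it (matching finding (a) in the abstract).
p1 = rasch_attempts_p(0.0, 0.0, attempt=1)
p3 = rasch_attempts_p(0.0, 0.0, attempt=3, delta_student=0.5)
```

In practice such models are fitted as cross-classified mixed-effects logistic regressions (students and items as crossed grouping factors), with activity covariates like video-lecture engagement entering the linear predictor as additional terms.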
