
Serving CS Formative Feedback on Assessments Using Simple and Practical Teacher-Bootstrapped Error Models



Abstract

The demand for computing education in post-secondary education is growing. However, teaching-staff hiring is not keeping pace, leading to increasing class sizes. As computers become ubiquitous, classes are following suit by increasing their use of technology. These two defining factors of scaled classes require us to reconsider teaching practices that originated in small classes with little technology. Rather than seeing scaled classes as a problem that needs managing, we propose they are an opportunity: they let us collect and analyze large, high-dimensional data sets and enable us to conduct experiments at scale.

One way classes are increasing their use of technology is by moving content delivery and assessment administration online. Massive Open Online Courses (MOOCs) have taken this to an extreme by delivering all material online, having no face-to-face interaction, and allowing a class to include thousands of students at once. To understand how this changes the information needs of the teacher, we surveyed MOOC teachers and compared our results to prior work that ran similar surveys among teachers of smaller online courses. While our results were similar, we did find that the MOOC teachers surveyed valued qualitative data, such as forum activity and student surveys, more than quantitative data such as grades. A likely reason is that teachers found quantitative data insufficient for monitoring class dynamics, such as problems with course material and student thought processes. They needed a source of data that required less upfront knowledge of what to look for and how to find it; with such data, their understanding of the students and the class situation could be more holistic.

Since qualitative data such as forum activity and surveys have an inherent selection bias, we focused on required, constructed-response assessments in the course.
This reduced selection bias, required less upfront knowledge, and focused attention on measuring how well students were learning the material. Also, since MOOCs have a high proportion of auditors, we moved to studying a large local class to obtain a complete sample.

We applied qualitative and quantitative methods to analyze wrong answers from constructed-response, code-tracing question sets delivered through an automated grading system. Using emergent coding, we defined tags to represent ways a student might arrive at a wrong answer and applied them to our data set. Since the wrong answers we identified as frequent occurred at a much higher rate than infrequent wrong answers, we found that analyzing only these frequent wrong answers provides a representative overview of the data. In addition, a content expert is more likely to be able to tag a frequent wrong answer than a random wrong answer.

Using the wrong-answer-to-tag association, we built a student error model and designed a hint intervention within the automated grading system. We deployed an in situ experiment in a large introductory computer science course to understand the effectiveness of parameters in the model, and we compared two different kinds of hints: reteaching and knowledge integration [28]. A reteaching hint re-explained the concept(s) associated with the tag. A knowledge-integration hint pushed the student in the right direction without re-explaining anything, for example by reminding them of a concept or asking them to compare two aspects of the assessment. We found it straightforward to implement and deploy our intervention experiment because of the existing class technology. In addition, for our model, we found that co-occurrence provides useful information for propagating tags to wrong answers we did not inspect. However, we were unable to find evidence that our hints improved student performance on post-test questions compared to no hints at all.
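The error model described above can be sketched in a few lines. This is a minimal illustration, not the dissertation's implementation: the question IDs, tag names, hint texts, and the single-most-frequent-tag fallback are all hypothetical, standing in for the hand-tagged frequent wrong answers, the co-occurrence-based tag propagation, and the hint lookup.

```python
from collections import Counter

# Hypothetical expert-tagged data: (question, wrong answer) -> error tags
# assigned during emergent coding. Only frequent wrong answers are tagged.
TAGGED_WRONG_ANSWERS = {
    ("q1", "3"): {"off_by_one"},
    ("q1", "4"): {"off_by_one", "loop_bound"},
    ("q2", "'ab'"): {"string_indexing"},
}

# One illustrative hint per tag (a reteaching or knowledge-integration hint).
HINTS = {
    "off_by_one": "Re-check how many times the loop body runs.",
    "loop_bound": "Compare the loop condition with the final index used.",
    "string_indexing": "Recall which character s[0] refers to.",
}

def propagate_tags(question, wrong_answer, tagged=TAGGED_WRONG_ANSWERS):
    """Return tags for a wrong answer; if it was never hand-tagged, fall
    back to the tag that co-occurs most often on that question's tagged
    answers (a crude stand-in for co-occurrence-based propagation)."""
    tags = tagged.get((question, wrong_answer))
    if tags:
        return set(tags)
    counts = Counter()
    for (q, _), ts in tagged.items():
        if q == question:
            counts.update(ts)
    return {t for t, _ in counts.most_common(1)}

def serve_hint(question, wrong_answer):
    """Serve the hint for the first matching tag, if any."""
    for tag in sorted(propagate_tags(question, wrong_answer)):
        if tag in HINTS:
            return HINTS[tag]
    return None
```

For example, an uninspected wrong answer to `q1` still receives the `off_by_one` hint, because that tag dominates the tagged answers for `q1`.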
Therefore, we performed a preliminary, exploratory analysis to understand potential reasons why our results were null and to inform future work.

We believe scaled classes are a prime opportunity to study learning. This work is an example of how to take advantage of that opportunity: first collect and analyze data from a scaled class, then deploy a scaled in situ intervention using the class's own technology. With this work, we encourage other researchers to take advantage of scaled classes and hope it can serve as a starting point for how to do so.

Bibliographic details

  • Author affiliation: University of California, Berkeley
  • Degree grantor: University of California, Berkeley
  • Subjects: Computer science; Education
  • Degree: Ph.D.
  • Year: 2017
  • Pages: 149 p.
  • Format: PDF
  • Language: English
  • Added to database: 2022-08-17 11:39:00
