
Applying Item Response Theory methods to design a learning progression-based science assessment.

Abstract

Learning progressions describe how students' understanding of a topic develops over time and classify students' progress into steps or levels. This study applies Item Response Theory (IRT) based methods to investigate how to design learning progression-based science assessments. The research questions are: (1) how can items in different formats be used to classify students into levels on the learning progression; (2) how can a test be designed to give good information about students' progress through the learning progression of a particular construct; and (3) what characteristics of test items support their use for assessing students' levels?

Data for this study were collected from 1,500 elementary and secondary school students during 2009--2010. The written assessment included items in several formats: Constructed Response (CR), Ordered Multiple Choice (OMC), and Multiple True or False (MTF) items. The main findings are as follows.

The OMC, MTF and CR items may measure different components of the construct. A single construct explained most of the variance in students' performances; however, additional dimensions corresponding to item format explain a certain amount of the remaining variance. These additional dimensions therefore need to be considered when capturing differences in students' performances on different item types that target the same underlying progression. Items in each format need to be improved in specific ways to classify students more accurately into the learning progression levels.

This study also establishes general steps that can be followed to design other learning progression-based tests. First, the boundaries between levels on the IRT scale can be defined using the means of the item thresholds across a set of good items. Second, items in multiple formats can be selected to achieve the information criterion at all the defined boundaries, which ensures the accuracy of the classification. Third, when item threshold parameters vary noticeably, the scoring rubrics and the items need to be reviewed to make the threshold parameters similar across items. One important design criterion for learning progression-based items is that, ideally, a student should be placed at the same level across items, which means the item threshold parameters (d1, d2 and d3) should be similar across items.

To design a learning progression-based science assessment, we also need to understand whether the assessment measures a single construct or several constructs, and how items are associated with the constructs being measured. Results from dimensionality analyses indicate that items addressing different carbon-transforming processes measure different aspects of the carbon cycle construct, whereas items addressing different practices assess the same construct. In general, correlations among different processes or practices are high. It is not clear whether these strong correlations reflect inherent links among the process/practice dimensions or the fact that the student sample shows little variation along those dimensions; future data are needed to examine the process/practice dimensionality in detail.

Finally, based on analyses of item characteristics, recommendations are made for writing more discriminating CR items and better OMC and MTF options. Item writers can follow these recommendations to write better learning progression-based items.
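The first two design steps summarized in the abstract can be illustrated with a small numerical sketch. The snippet below is not taken from the dissertation: it assumes a Rasch-family partial credit model with hypothetical step thresholds (d1, d2, d3) for a few items, defines level boundaries as the mean of each threshold across items, and then checks test information at those boundaries.

```python
import numpy as np

def pcm_category_probs(theta, deltas):
    """Category probabilities for one partial credit model (PCM) item.

    theta  : person ability on the IRT scale
    deltas : item step thresholds (d1, d2, d3, ...) for categories 1..m
    Returns probabilities for score categories 0..m.
    """
    # Numerator exponents: 0 for category 0, then cumulative sums of (theta - d_k).
    cum = np.concatenate(([0.0], np.cumsum(theta - np.asarray(deltas))))
    num = np.exp(cum - cum.max())  # subtract max for numerical stability
    return num / num.sum()

def pcm_item_information(theta, deltas):
    """Fisher information of a PCM item at theta
    (the variance of the item score given theta)."""
    p = pcm_category_probs(theta, deltas)
    scores = np.arange(len(p))
    mean = (scores * p).sum()
    return ((scores - mean) ** 2 * p).sum()

# Hypothetical (d1, d2, d3) thresholds for a few learning progression items;
# the dissertation's actual calibrated values are not reproduced here.
item_thresholds = np.array([
    [-1.2, 0.1, 1.4],
    [-0.9, 0.3, 1.6],
    [-1.1, -0.1, 1.2],
])

# Step 1: level boundaries = mean of each threshold across the set of good items.
boundaries = item_thresholds.mean(axis=0)
print("Level boundaries on the IRT scale:", boundaries)

# Step 2: test information at each boundary; items (in multiple formats) would be
# added until the information criterion is met at every defined boundary.
for b in boundaries:
    info = sum(pcm_item_information(b, d) for d in item_thresholds)
    print(f"Test information at boundary {b:+.2f}: {info:.2f}")
```

In this sketch, a designer would keep selecting or revising items until the printed information values reach whatever criterion is set for each boundary, which is the sense in which the abstract says the information criterion "ensures the accuracy of the classification."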

Bibliographic details

  • Author: Chen, Jing
  • Affiliation: Michigan State University
  • Degree-granting institution: Michigan State University
  • Subject: Education Tests and Measurements; Education Sciences
  • Degree: Ph.D.
  • Year: 2012
  • Pages: 165 p.
  • Total pages: 165
  • Format: PDF
  • Language: eng

