
NEPS technical report for science: Scaling results of starting cohort 4 in 11th grade


Abstract

The National Educational Panel Study (NEPS) aims at investigating the development of competences across the whole life span and designs tests for assessing these different competence domains. In order to evaluate the quality of the competence tests, a wide range of analyses based on item response theory (IRT) have been performed. This paper describes the data on scientific literacy for starting cohort 4 in grade 11. In addition to descriptive statistics for the data, the paper explains the scaling model applied to estimate competence scores, the analyses performed to investigate the quality of the scale, and the results of these analyses. The science test in grade 11 originally consisted of 29 multiple choice and complex multiple choice items and covered two knowledge domains as well as three different contexts. Five items had to be removed due to insufficient item quality. The test was administered to 4,417 students. A partial credit model was used for scaling the data. Item fit statistics, differential item functioning, Rasch homogeneity, and the test's dimensionality were evaluated to ensure the quality of the test. The remaining test items show good item fit values and measurement invariance across various subgroups. Moreover, the test showed moderate reliability. The data show that the assumption of unidimensionality of scientific literacy as measured by this test seems adequate. Among the challenges of this test is the lack of very easy items. Overall, however, the results emphasize the good psychometric properties of the science test, thus supporting the estimation of reliable scientific literacy scores. In this paper, the data available in the Scientific Use File are described and the ConQuest syntax for scaling the data is provided. (IPN/Orig.)
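The abstract names the partial credit model (PCM) as the scaling model; the actual estimation was done in ConQuest, which is not reproduced here. As a minimal illustrative sketch only, the PCM category probabilities for a single polytomous item can be computed from an ability value θ and the item's step parameters δ₁…δₘ (with δ₀ = 0 by convention); the function name and parameterization below are this sketch's own, not the report's:

```python
import numpy as np

def pcm_probabilities(theta, deltas):
    """Partial credit model: probability of each score category 0..m
    for one item, given ability theta and step parameters deltas.

    P(X = k | theta) is proportional to exp(sum_{j<=k} (theta - delta_j)),
    with delta_0 = 0 by convention.
    """
    # Step logits: first entry is the delta_0 = 0 convention.
    steps = np.concatenate(([0.0], theta - np.asarray(deltas, dtype=float)))
    logits = np.cumsum(steps)
    # Subtract the max before exponentiating for numerical stability.
    expl = np.exp(logits - logits.max())
    return expl / expl.sum()

# Dichotomous special case (one step parameter): the PCM reduces to the
# Rasch model, so at theta == delta the two categories are equally likely.
print(pcm_probabilities(0.0, [0.0]))   # [0.5 0.5]
```

With two step parameters the function returns three category probabilities, matching the complex multiple choice items that a partial credit model accommodates alongside dichotomous ones.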

Bibliographic details

  • Authors

    Hahn Inga; Kähler Jana;

  • Affiliation
  • Year: 2016
  • Pages
  • Format: PDF
  • Language: eng
  • CLC classification

