NAEP Validity Studies: A Study of Equating in NAEP

Abstract

This study investigates how much uncertainty equating error adds to NAEP estimates under both ideal and less-than-ideal circumstances. For example, circumstances led to a situation in which the equating of the 1994 reading assessment to the 1992 assessment had to be based on a set of common items that was both smaller, and more heavily weighted toward multiple-choice items, than anticipated. If performance on the two item types does not change at the same rate over time, such an equating might introduce systematic bias into trends measured from equated scores. Data from past administrations are used to guide simulations of various (better and worse) equating designs, and the error due to equating is estimated empirically. The design includes a variety of factors that might affect the accuracy of equating, with the levels of each factor based roughly on operational values in the NAEP 1992 and 1994 reading assessments and the 1992 mathematics assessment. The purpose is to estimate the approximate additional uncertainty that might be introduced by equating from one assessment wave to the next, and to determine which factors in the equating design contribute most to that uncertainty. The specific factors investigated were the number of items in the scale, the proportion of items in the scale taken by each student, the proportion of items in each administration that are common, the proportion of each item type in each scale, the proportion of each item type among the common items used for equating, the scale-linking strategy (IRT invariance, common-item, or multiple-group IRT linking), and the change in ability from wave 1 to wave 2.
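
The abstract describes estimating equating error empirically by simulating repeated common-item linkings. The sketch below is not the study's procedure; it is a minimal illustration, assuming a Rasch (1PL) model, crude proportion-correct difficulty estimates, and mean/mean common-item linking, with all function names (simulate_responses, estimate_difficulty, linking_constant, equating_error) and parameter values chosen purely for illustration. The standard deviation of the linking constant across replications serves as a rough empirical measure of equating error.

```python
# Minimal sketch (not the study's actual method): simulate two assessment waves
# that share a set of common items, link the wave-2 scale to wave 1 through
# those items, and use the spread of the linking constant across replications
# as an empirical estimate of equating error. All modeling choices and
# parameter values here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulate_responses(theta, b):
    """Rasch (1PL) responses: P(correct) = logistic(theta - b)."""
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    return (rng.random(p.shape) < p).astype(int)

def estimate_difficulty(responses):
    """Crude per-item difficulty estimate from proportion correct (negative logit)."""
    p = responses.mean(axis=0).clip(0.01, 0.99)
    return -np.log(p / (1 - p))

def linking_constant(b_wave1, b_wave2, common_idx):
    """Mean/mean common-item linking: shift placing wave-2 items on the wave-1 scale."""
    return b_wave1[common_idx].mean() - b_wave2[common_idx].mean()

def equating_error(n_items=60, n_common=20, n_examinees=2000,
                   ability_shift=0.1, n_reps=200):
    """Standard deviation of the linking constant across replications."""
    b_true = rng.normal(0, 1, n_items)      # item difficulties shared by both waves
    common_idx = np.arange(n_common)        # first n_common items serve as the link
    constants = []
    for _ in range(n_reps):
        theta1 = rng.normal(0.0, 1, n_examinees)
        theta2 = rng.normal(ability_shift, 1, n_examinees)  # ability change at wave 2
        b1 = estimate_difficulty(simulate_responses(theta1, b_true))
        b2 = estimate_difficulty(simulate_responses(theta2, b_true))
        constants.append(linking_constant(b1, b2, common_idx))
    return np.std(constants)

print("Approx. linking SE with 20 common items:", equating_error(n_common=20))
print("Approx. linking SE with  5 common items:", equating_error(n_common=5))
```

Under these assumptions, shrinking the common-item set typically produces a noticeably larger standard error of the linking constant, which is the kind of comparison across design factors (common-item count, item-type mix, linking strategy, ability change) that the study makes at operational NAEP scale.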
