Educational and Psychological Measurement

Using Differential Item Functioning to Test for Interrater Reliability in Constructed Response Items



Abstract

The purpose of this study was to investigate a new way of evaluating interrater reliability that can allow one to determine whether two raters differ with respect to their ratings on a polytomous rating scale or constructed response item. Specifically, differential item functioning (DIF) analyses were used to assess interrater reliability and were compared with traditional interrater reliability measures. Three procedures that can serve as measures of interrater reliability were compared: (1) the intraclass correlation coefficient (ICC), (2) Cohen's kappa statistic, and (3) the DIF statistic obtained from Poly-SIBTEST. The results of this investigation indicated that DIF procedures appear to be a promising alternative for assessing the interrater reliability of constructed response items, or of other polytomous item types such as rating scales. Furthermore, using DIF to assess interrater reliability does not require a fully crossed design and allows one to determine whether a rater is more severe or more lenient in scoring each individual polytomous item on a test or rating scale.
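To make the contrast concrete, the traditional measures the abstract mentions are straightforward to compute directly. The sketch below implements unweighted Cohen's kappa from its definition (observed agreement corrected for chance agreement) and an overall severity gap between two raters, the rater-level question that the DIF approach answers item by item. The scores are invented for illustration; Poly-SIBTEST itself is specialized software and is not reproduced here.

```python
import numpy as np

def cohens_kappa(a, b):
    """Unweighted Cohen's kappa for two raters' categorical scores."""
    a, b = np.asarray(a), np.asarray(b)
    cats = np.unique(np.concatenate([a, b]))
    po = np.mean(a == b)  # observed proportion of agreement
    # chance agreement from each rater's marginal category proportions
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in cats)
    return (po - pe) / (1 - pe)

# Hypothetical scores (0-2 scale) from two raters on ten responses
rater_a = [0, 1, 2, 2, 1, 0, 2, 1, 1, 2]
rater_b = [0, 1, 2, 1, 1, 0, 2, 1, 1, 2]

kappa = cohens_kappa(rater_a, rater_b)

# A positive gap means rater A scores more leniently overall -- the
# severity/leniency question that DIF analysis addresses per item.
severity_gap = np.mean(rater_a) - np.mean(rater_b)
```

Note that kappa summarizes agreement over the whole set of responses, whereas the DIF framing treats the two raters as the two "groups" in a DIF comparison, so severity or leniency can be flagged for each item separately even when raters do not score the same set of responses.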
