The Journal of Graduate Medical Education

The Relationship Between Faculty Performance Assessment and Results on the In-Training Examination for Residents in an Emergency Medicine Training Program



Abstract

What was known: Residents' medical knowledge is commonly assessed by the in-training examination (ITE) and faculty evaluations of resident performance.

What is new: Faculty evaluations and ITE scores increase with residents' postgraduate year level, and are moderately correlated.

Limitations: A single-program study and small sample size may limit generalizability.

Bottom line: Faculty assessment of resident medical knowledge may represent a construct that is distinct and separate from the "medical knowledge" assessed by the ITE.

Introduction: Faculty assessment of clinical performance is a frequently used method for assessing competencies and is required by nearly all Residency Review Committees.1 Most programs also administer annual in-training examinations (ITEs), designed to measure each resident's medical knowledge (MK). Despite the nearly universal use of these 2 methods, little research has assessed the relationship between the data they produce. ITEs have been shown to predict resident performance on future specialty certifying examinations,2 yet the literature shows a poor correlation between ITE scores and resident clinical performance.3–8 To date, no study assessing the relationship between ITE performance and faculty evaluations has documented the reliability of those evaluations. Because emergency medicine (EM) residents routinely work closely with several different attending evaluators, they offer a unique opportunity to assess the interobserver and overall reliability of clinical evaluations. If faculty evaluations prove reliable but yield results divergent from the ITE results, the likely reason is that the 2 evaluation methods measure different constructs. The goal of this investigation was to assess the reliability of faculty evaluations and to determine the relationship between faculty assessment of resident performance and residents' ITE scores. In addition, we planned to determine whether those relationships changed when the data were stratified by postgraduate year (PGY) level.

Results: During the 6-year study period, 51 faculty members completed 1912 evaluations of 59 residents. The data set included 140 composite, third-quarter evaluations, with most residents having evaluations for multiple years of training. A mean of 13.7 (SD = 2.9) faculty members evaluated each resident during that period. There were 12 instances in which ITE scores were not available, leaving 128 complete sets of resident observations for data analysis. No residents repeated any year of training during the study period. The random-effects, intraclass correlation analysis revealed that the faculty evaluation process was highly reliable (MK mean κ = 0.99 and OC mean κ = 0.99). We also grouped the residents by PGY level and repeated the analysis to remove potential bias from evaluators' knowledge of the residents' year of training, which could falsely elevate reliabilities. That analysis again revealed high reliabilities for both MK (PGY-1 mean κ = 0.68; PGY-2 mean κ = 0.76; PGY-3 mean κ = 0.84) and OC factors (PGY-1 mean κ = 0.70; PGY-2 mean κ = 0.73; PGY-3 mean κ = 0.81). The mean scores for the ITE, MK, and OC increased significantly with year of training (table 1). The ITE scores had more overlap across years of training than did the MK assessed by faculty evaluations (figures 1 and 2). When correlation analyses were performed across all PGY levels, MK and OC had very high correlations with PGY level (MK r = 0.97, P < .001; OC r = 0.97, P < .001), whereas the ITE scores were only moderately correlated with PGY level (r = 0.60, P …)
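For readers who want to mirror this kind of analysis on their own evaluation data, here is a minimal sketch in Python. It is not the authors' code: the toy data, the column names (resident, rater, mk), and the effect sizes are invented, and ICC2 from the pingouin library stands in for whatever random-effects model the study actually fit. It estimates multi-rater reliability of faculty MK ratings and then correlates mean ratings and mock ITE scores with PGY level, as in the Results above.

    # Illustrative only: synthetic stand-in for the study's evaluation data.
    import numpy as np
    import pandas as pd
    import pingouin as pg
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)

    # Hypothetical long-format ratings: each faculty rater scores each
    # resident's medical knowledge (MK); true score rises with PGY level.
    n_residents, n_raters = 20, 10
    pgy = rng.integers(1, 4, size=n_residents)          # PGY-1 .. PGY-3
    true_mk = 3.0 + 1.5 * pgy
    ratings = pd.DataFrame(
        [(res, rater, true_mk[res] + rng.normal(0, 0.8))
         for res in range(n_residents) for rater in range(n_raters)],
        columns=["resident", "rater", "mk"],
    )

    # Inter-rater reliability under a two-way random-effects model (ICC2):
    # how consistently do different faculty members rate the same resident?
    icc = pg.intraclass_corr(data=ratings, targets="resident",
                             raters="rater", ratings="mk")
    print(icc.set_index("Type").loc["ICC2", ["ICC", "CI95%"]])

    # Correlate each resident's mean faculty MK rating, and a mock ITE
    # score, with PGY level, mirroring the across-level analysis above.
    mean_mk = ratings.groupby("resident")["mk"].mean().to_numpy()
    ite = 60 + 8 * pgy + rng.normal(0, 6, n_residents)  # mock ITE scores
    for name, scores in [("faculty MK", mean_mk), ("ITE", ite)]:
        r, p = pearsonr(scores, pgy)
        print(f"{name} vs PGY: r = {r:.2f}, P = {p:.3g}")

With the strong level effect built into the toy data, both printed correlations come out high; the interesting comparison on real data is how much weaker the ITE correlation is than the faculty-rating one, which is the pattern the abstract reports.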
