The Journal of Graduate Medical Education
The Relationship Between Faculty Performance Assessment and Results on the In-Training Examination for Residents in an Emergency Medicine Training Program
What was known
Residents' medical knowledge is commonly assessed by the in-training examination (ITE) and by faculty evaluations of resident performance.

What is new
Faculty evaluations and ITE scores increase with residents' postgraduate year level and are moderately correlated.

Limitations
Single-program study and small sample size may limit generalizability.

Bottom line
Faculty assessment of resident medical knowledge may represent a construct that is distinct and separate from the "medical knowledge" assessed by the ITE.

Introduction
Faculty assessment of clinical performance is a frequently used method for assessing competencies and is required by nearly all Residency Review Committees.1 Most programs also administer an annual in-training examination (ITE), designed to measure each resident's medical knowledge (MK). Despite the nearly universal use of these 2 methods, little research has assessed the relationship between the data they produce. ITEs have been shown to predict resident performance on future specialty certifying examinations,2 yet the literature shows a poor correlation between ITE scores and resident clinical performance.3–8 To date, no study of the relationship between ITE performance and faculty evaluations has documented the reliability of those faculty evaluations. Because emergency medicine (EM) residents routinely work closely with several different attending evaluators, they offered a unique opportunity to assess the interobserver and overall reliability of clinical evaluations. If faculty evaluations prove reliable but yield results divergent from ITE results, the likely reason is that the 2 evaluation methods measure different constructs. The goal of this investigation was to assess the reliability of faculty evaluations and to determine the relationship between faculty's assessment of resident performance and residents' ITE scores.
In addition, we planned to determine whether those relationships changed when the data were stratified by postgraduate year (PGY) level.

Results
During the 6-year study period, 51 faculty members completed 1912 evaluations of 59 residents. The data set included 140 composite, third-quarter evaluations, with most residents having evaluations for multiple years of training. A mean of 13.7 (SD ± 2.9) faculty members evaluated each resident during that period. The ITE scores were unavailable in 12 instances, leaving 128 complete sets of resident observations for data analysis. No residents repeated a year of training during the study period.

The random-effects intraclass correlation analysis revealed that the faculty evaluation process was highly reliable (MK mean κ = 0.99; OC mean κ = 0.99). We also grouped the residents by PGY level and repeated the analysis to remove potential evaluator bias, in which evaluators' knowledge of a resident's year of training could falsely inflate reliability. That analysis again revealed high reliabilities for both MK (PGY-1 mean κ = 0.68; PGY-2 mean κ = 0.76; PGY-3 mean κ = 0.84) and OC factors (PGY-1 mean κ = 0.70; PGY-2 mean κ = 0.73; PGY-3 mean κ = 0.81).

The mean scores for the ITE, MK, and OC increased significantly with year of training (table 1). The ITE scores overlapped more across years of training than did the MK scores from faculty evaluations (figures 1 and 2). When correlation analyses were performed across all PGY levels, MK and OC correlated very highly with PGY level (MK r = 0.97, P < .001; OC r = 0.97, P < .001), whereas the ITE score correlated moderately with PGY level (r = 0.60, P < .001). Assessment of the relationship between the ITE, MK, and OC scores showed that faculty assessments of MK correlated strongly with the OC score (r = 0.99, P < .001) and moderately with the ITE score (r = 0.61, P < .001; table 2).
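The correlation analyses reported above are Pearson product-moment correlations between PGY level, faculty MK scores, and ITE scores. The sketch below shows how such an analysis could be computed; the resident records are entirely hypothetical illustrations, not the study's data, and the function is a plain-Python stand-in for a statistics package.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Hypothetical records: (PGY level, faculty MK rating, ITE percentile score).
records = [
    (1, 3.1, 68), (1, 3.0, 74), (1, 3.2, 71),
    (2, 3.8, 75), (2, 3.9, 70), (2, 3.7, 79),
    (3, 4.5, 80), (3, 4.4, 77), (3, 4.6, 83),
]
pgy = [r[0] for r in records]
mk = [r[1] for r in records]
ite = [r[2] for r in records]

# Faculty MK tracks PGY level tightly; ITE tracks it more loosely.
print("MK vs PGY:", round(pearson_r(pgy, mk), 2))
print("ITE vs PGY:", round(pearson_r(pgy, ite), 2))
```

With made-up data like this, the qualitative pattern (a tighter MK-PGY relationship than ITE-PGY) depends on the numbers chosen; the study's actual coefficients come from its 128 resident observations.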
FIGURE 1. Change With Postgraduate Year (PGY) for Faculty Assessment of Medical Knowledge