The American Journal of Surgery
Faculty and resident evaluations of medical students on a surgery clerkship correlate poorly with standardized exam scores


Abstract

Background: The clinical knowledge of medical students on a surgery clerkship is routinely assessed via subjective evaluations from faculty members and residents. Interpretation of these ratings should ideally be valid and reliable. However, prior literature has questioned the correlation between subjective and objective components when assessing students' clinical knowledge.

Methods: Retrospective cross-sectional data were collected from medical student records at The Johns Hopkins University School of Medicine from July 2009 through June 2011. Surgical faculty members and residents rated students' clinical knowledge on a 5-point, Likert-type scale. Interrater reliability was assessed using intraclass correlation coefficients for students with ≥4 attending surgeon evaluations (n = 216) and ≥4 resident evaluations (n = 207). Convergent validity was assessed by correlating average evaluation ratings with scores on the National Board of Medical Examiners (NBME) clinical subject examination for surgery. Average resident and attending surgeon ratings were also compared across NBME quartiles using analysis of variance.

Results: Reliability was high for both resident ratings (intraclass correlation coefficient, 0.81) and attending surgeon ratings (intraclass correlation coefficient, 0.76). Resident and attending surgeon ratings shared a moderate degree of variance (19%). However, average resident ratings and average attending surgeon ratings each shared only a small degree of variance with NBME surgery examination scores (ρ² ≤ 0.09). When ratings were compared among NBME quartile groups, the only significant difference was in residents' ratings of students in the bottom quartile of scores versus those in the top quartile (P = 0.007).

Conclusions: Although high interrater reliability suggests that attending surgeons and residents rate students consistently, the lack of convergent validity suggests that these ratings may not reflect actual clinical knowledge. Both faculty members and residents may benefit from training in knowledge assessment, which would likely increase opportunities to recognize deficiencies and make student evaluation a more valuable tool.
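The pattern the study reports, high interrater reliability alongside weak convergent validity, can be illustrated with synthetic data. The sketch below (not the study's data; all parameters are invented for illustration) simulates raters who agree strongly with one another because they share a common impression of each student, while that impression is only loosely tied to the knowledge measured by an exam. It computes a standard one-way random-effects ICC(1,1) and the squared Spearman correlation between mean ratings and exam scores:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_students, k_raters = 200, 4

# Latent knowledge and a shared "clinical impression" only weakly tied to it.
true_knowledge = rng.normal(0.0, 1.0, n_students)
impression = 0.3 * true_knowledge + rng.normal(0.0, 1.0, n_students)

# Each rater sees the shared impression plus individual noise; 1-5 Likert scale.
raw = 3.0 + impression[:, None] + rng.normal(0.0, 0.5, (n_students, k_raters))
ratings = np.clip(np.round(raw), 1, 5)

def icc1(x):
    """One-way random-effects ICC(1,1): (MSB - MSW) / (MSB + (k-1)*MSW)."""
    n, k = x.shape
    grand = x.mean()
    msb = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between-student MS
    msw = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))  # within
    return (msb - msw) / (msb + (k - 1) * msw)

# Exam score tracks true knowledge with measurement noise.
nbme = 70.0 + 8.0 * true_knowledge + rng.normal(0.0, 4.0, n_students)
rho, _ = spearmanr(ratings.mean(axis=1), nbme)

print(f"ICC(1,1) = {icc1(ratings):.2f}, rho^2 = {rho ** 2:.2f}")
```

With these made-up parameters the simulated ICC lands high (raters agree with each other) while ρ² stays small (ratings explain little exam-score variance), mirroring the dissociation the abstract describes: consistency among raters does not by itself establish that ratings measure knowledge.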