The Journal of Graduate Medical Education

The CORD Standardized Letter of Evaluation: Have We Achieved Perfection or Just a Better Understanding of Our Limitations?


Abstract

In the early 1990s, an emergency medicine (EM) program director remediated a resident for over a year, to no avail. The resident's contract was not renewed, and a recommendation was made that the resident consider another specialty. When this decision was discussed with the department chair and clerkship director, who had written a very positive and “flowery” narrative letter of recommendation (NLOR), the chair said, “We knew this resident would struggle.” The “fluffed up” letter was a disservice to colleagues and to the resident, who spent a difficult year in over her head. At the time, the general discussion among EM program directors indicated that accurate information transfer was a common limitation of the NLOR. Often, NLORs included no objective data (not even the EM clerkship grade) and provided no global comparison to other students. It was often perceived that one could not get to a “bottom line” view of the candidate despite a lengthy letter that was time consuming to prepare. In an attempt to address the problems with NLORs, a Council of EM Residency Directors (CORD) subcommittee developed a standardized letter of recommendation (SLOR) in 1995, which was initiated in 1997.1 The SLOR offered more objective data than the NLOR, including an evaluation of EM clerkship performance and a prediction by the writer of how their program might rank the student. The original SLOR included the following 4 sections: (A) background information (clerkship performance); (B) qualifications for EM (personal characteristics relating to the choice of EM); (C) global assessment (comparisons to other students); and (D) written comments. As discussed in a paper by Girzadas et al,2 it quickly became clear that the SLOR was easier to prepare and read with the new format. Whether it made information transfer more reliable was a separate question, and that led to research into potential SLOR limitations.
That research, similar to what Girzadas et al2 had found, suggested some problems.3,4 Potential biases resulting in grade inflation of the SLOR were uncovered. Areas of concern included gender bias, inexperience by letter writers, and the duration of time the letter writer knew the applicant.3,4 Another paper by a CORD task force demonstrated evidence of SLOR “grade inflation,” as 40% of reviewed SLORs rated their applicants in the “top 10%,” and over 95% of these SLORs rated applicants in the top third.5 Finally, when rank lists were compared with the global assessment question regarding estimated rank list position, overestimation on the SLOR occurred 66% of the time.6 The SLOR is central to 2 papers in this issue of the Journal of Graduate Medical Education. The study by Diab et al7 shows that the SLOR, with its measurable categories, allows research into the application process. Diab et al7 demonstrated a significant increase in the global assessment ranking of “outstanding” in letters where applicants did not waive their Family Educational Rights and Privacy Act (FERPA) rights, suggesting that if a faculty member is aware that an applicant may read their SLOR, the grade may be inflated.7 Thankfully, 93% of applicants waived their FERPA rights. The study is limited in that we do not know whether applicants who did not waive their rights were representative of the whole population of applicants, but it does suggest one should consider the possibility of bias if no waiver is present. The paper by Hegarty et al8 describes the work of a CORD SLOR task force that was convened in 2011 to review the SLOR and determine whether improvements could be recommended. Although only 37% of the group surveyed had read the CORD guidelines in the previous year, these guidelines were very general and did not include specific recommendations for each question or even for each of the 4 sections. 
The consequence was great variability in how the question regarding “One Key Comment from ED Faculty Evaluations” was addressed in section A. The manner in which answers we
