
From Structured, Standardized Assessment to Unstructured Assessment in the Workplace


Abstract

Real-time patient evaluation of physicians' communication and professional skills is a wonderful idea. Wasn't it Aristotle who said that the guests will be a better judge of the feast than the cook? But if such evaluations are so logical, why aren't patient evaluations an integral part of all assessment programs? While reading the article by Dine et al1 in the current issue of the Journal of Graduate Medical Education, I was reminded of a silly question I asked during my younger years in medical education: "Why do we insist on structuring and standardizing our assessments, and training our faculty to assess the competence of students to perform in unstandardized contexts for untrained bosses and with untrained patients?" My colleagues replied that tests must be fair and equitable, and therefore need to be structured and standardized to produce reproducible results. Of course I agreed with them, because how can one disagree with the notion that tests must be fair?

Dine et al,1 who study patient assessment of internal medicine resident skills, acknowledge that it is not possible to obtain standardization or structuring in all facets of an assessment program, yet these very aspects are important. As often occurs when patient assessments are used, no attempt was made to train the patients providing the assessments; instead, their "raw" information was collected and used. There is good value in this approach; after all, it is patients who decide whether they trust and respect their physician, and it is patients who judge their physician's communication skills and whether they are treated professionally and with respect.

If an assessment cannot be standardized or structured, how can we ensure that the assessment is of high quality and fair? This question is critical because, generally, our approach to assessment quality is based on the notion of a construct. In this context, a construct is a human characteristic that cannot be observed directly but has to be inferred from things we can observe. A typical example of a construct in medicine is blood pressure. It cannot be observed directly; rather, it is inferred from reading the sphygmomanometer while lowering the cuff pressure and listening to the Korotkoff sounds with a stethoscope. In medical education assessment, we use constructs such as knowledge, skills, and professionalism.

In their seminal 1955 article, Cronbach and Meehl2 shaped our thinking on construct validation. Part of their view is that an individual test item is not valid per se; it is the total score emanating from the aggregation of performances on all items that makes the test valid. With multiple-choice questions this concept is easy to understand: a single item does not tell us much about a candidate's competence, but it is plausible that a larger set of item responses will provide more accurate information. This is why standardized tests typically contain large numbers of items. The same principle is adhered to in many other tests; it is why, in an objective structured clinical examination (OSCE), we add the performance on a chest examination station to those on resuscitation and communication stations to form a total score for "skills," despite the counterintuitive nature of such an approach. Yet collecting assessment information from patients, as performed by Dine et al,1 is fundamentally different from the assessment modalities described above.
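
The aggregation argument can be made concrete with a small simulation. The sketch below is illustrative only and is not from the editorial: it assumes, arbitrarily, that each item score is a candidate's true ability plus independent Gaussian noise, and it measures how closely the aggregated score tracks the underlying construct as items are added.

    # Illustrative sketch (not from the editorial): why a total score over many
    # noisy items reflects the underlying construct better than any single item.
    # The ability and noise distributions and all parameter values are assumptions.
    # Requires Python 3.10+ for statistics.correlation.
    import random
    import statistics

    random.seed(42)

    N_CANDIDATES = 2000
    TRUE_SD = 1.0   # spread of true ability across candidates (assumed)
    NOISE_SD = 2.0  # measurement error on a single item (assumed)

    def simulate_reliability(n_items: int) -> float:
        """Correlate the mean item score with true ability across candidates."""
        abilities, means = [], []
        for _ in range(N_CANDIDATES):
            ability = random.gauss(0.0, TRUE_SD)
            scores = [ability + random.gauss(0.0, NOISE_SD) for _ in range(n_items)]
            abilities.append(ability)
            means.append(sum(scores) / n_items)
        return statistics.correlation(abilities, means)

    for n in (1, 5, 25, 100):
        print(f"{n:>3} item(s): corr(total score, true ability) = "
              f"{simulate_reliability(n):.3f}")

Under this toy model the expected correlation is TRUE_SD / sqrt(TRUE_SD^2 + NOISE_SD^2 / n), so a single noisy item tracks ability at roughly 0.45, while 100 aggregated items reach about 0.98. This is the logic behind Cronbach and Meehl's point that validity resides in the aggregate, and behind the large item counts of standardized tests.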
One may question whether a construct-based approach is the most appropriate in this context or whether we need different approaches to determining the quality of patient-based assessments. The first point to consider when comparing patient-based assessment to the more established assessments described above concerns a fundamental assumption in the construct approach. During a multiple-choice test, and even during an OSCE, it is a reasonable and necessary assumption that the object of measurement (ie, the student or resident) does not change. This is an important assumption …
