Medical Education

Sources of variation in performance on a shared OSCE station across four UK medical schools.



Abstract

CONTEXT: High-stakes undergraduate clinical assessments should be based on transparent standards comparable between different medical schools. However, simply sharing questions and pass marks may not ensure comparable standards and judgements. We hypothesised that in multicentre examinations, teaching institutions contribute to systematic variations in students' marks between different medical schools through the behaviour of their markers, standard-setters and simulated patients.

METHODS: We embedded a common objective structured clinical examination (OSCE) station in four UK medical schools. All students were examined by a locally trained examiner as well as by a centrally provided examiner. Central and local examiners did not confer. Pass scores were calculated using the borderline groups method. Mean scores awarded by each examiner group were also compared. Systematic variations in scoring between schools and between local and central examiners were analysed.

RESULTS: Pass scores varied slightly but significantly between schools, and between local and central examiners. The patterns of variation were usually systematic between local and central examiners (either consistently lower or higher). In some cases scores given by one examiner pair were significantly different from those awarded by other pairs in the same school, implying that other factors (possibly simulated patient behaviour) make a significant difference to student scoring.

CONCLUSIONS: Shared undergraduate clinical assessments should not rely on scoring systems and standard setting which fail to take into account other differences between schools. Examiner behaviour and training, and other local factors, are important contributors to variations in scores between schools. The OSCE scores of students from different medical schools should not be directly compared without taking such systematic variations into consideration.
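The pass mark described under METHODS comes from the borderline groups method: alongside the checklist score, each examiner records a global judgement of the performance (e.g. fail / borderline / pass), and the station's pass score is set at the mean checklist score of the candidates judged borderline. A minimal Python sketch of that calculation, using hypothetical scores and rating labels rather than data from the study:

    from statistics import mean

    # Hypothetical station results: each examiner records a checklist score
    # (here out of 30) plus a global rating of the same performance.
    # Values are illustrative only, not data from the paper.
    results = [
        {"score": 24, "rating": "pass"},
        {"score": 17, "rating": "borderline"},
        {"score": 12, "rating": "fail"},
        {"score": 19, "rating": "borderline"},
        {"score": 27, "rating": "good"},
        {"score": 16, "rating": "borderline"},
    ]

    def borderline_groups_pass_score(results):
        # Pass score = mean checklist score of the 'borderline' group.
        borderline = [r["score"] for r in results if r["rating"] == "borderline"]
        if not borderline:
            raise ValueError("no borderline ratings; pass score is undefined")
        return mean(borderline)

    print(borderline_groups_pass_score(results))  # (17 + 19 + 16) / 3 = 17.33...

Because each school's own borderline group determines its pass mark, any systematic difference in examiner or simulated-patient behaviour shifts both the students' scores and the cut score, which is one reason the study paired a local with a central examiner on the same students.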


