The Journal of continuing education in the health professions

Standardizing evaluation of on-line continuing medical education: physician knowledge, attitudes, and reflection on practice.

Abstract

INTRODUCTION: Physicians increasingly earn continuing medical education (CME) credits through on-line courses, but there have been few rigorous evaluations to determine their effects. The present study explores the feasibility of implementing standardized evaluation templates and tests them by evaluating 30 on-line CME courses.

METHODS: A time series design was used to compare the knowledge, attitudes, and reported changes in practice of physician participants who completed any of 30 on-line CME courses hosted on an academic CME Web site and a CME Web portal during the period from August 1, 2002, through March 31, 2003. Data were collected at baseline, at course completion, and 4 weeks later. Paired t tests were used to compare the means of responses across time.

RESULTS: U.S. physicians completed 720 post-tests. Quality of content was the characteristic of greatest importance to participants; too little interaction was the largest source of dissatisfaction. Overall mean knowledge scores increased from 58.1% at baseline to 75.6% at post-test and then decreased to 68.2% at 4 weeks following the course. Effect sizes for knowledge gains immediately following the course were larger for case-based than for text-based courses. Nearly all physicians reported making changes in practice following course completion, although the reported changes differed from the expected changes.

CONCLUSIONS: Increases in physician knowledge and knowledge retention were demonstrated following participation in on-line CME courses. The implementation of standardized evaluation tests proved feasible and allowed longitudinal evaluation analyses across CME providers and content areas.
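The METHODS section describes paired t tests on mean scores across time points and effect sizes for knowledge gains. The following minimal sketch is not part of the article; it only illustrates that kind of analysis in Python, using made-up per-physician scores, and all variable names and values are assumptions for illustration.

```python
# Illustrative sketch only: paired t test on hypothetical knowledge scores
# for the same physicians at baseline and at post-test. The numbers below
# are invented and do not come from the study.
import numpy as np
from scipy import stats

# Hypothetical percent-correct knowledge scores (one pair per physician).
baseline = np.array([55.0, 60.0, 50.0, 62.5, 58.0, 57.5, 61.0, 54.0])
post_test = np.array([72.5, 80.0, 70.0, 78.0, 76.0, 74.0, 79.0, 75.5])

# Paired t test on the within-physician change.
t_stat, p_value = stats.ttest_rel(post_test, baseline)

# Effect size for paired data: Cohen's d computed on the difference scores.
diff = post_test - baseline
cohens_d = diff.mean() / diff.std(ddof=1)

print(f"mean baseline = {baseline.mean():.1f}%, mean post-test = {post_test.mean():.1f}%")
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}, Cohen's d = {cohens_d:.2f}")
```

The same pattern would apply to the baseline versus 4-week comparison used to assess knowledge retention, simply by substituting the follow-up scores for the post-test scores.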