
Interrater agreement in the evaluation of discrepant imaging findings with the Radpeer system

Abstract

OBJECTIVE. The Radpeer system is central to the quality assurance process in many radiology practices. Previous studies have shown poor agreement between physicians in the evaluation of their peers. The purpose of this study was to assess the reliability of the Radpeer scoring system.

MATERIALS AND METHODS. A sample of 25 discrepant cases was extracted from our quality assurance database. Images were made anonymous; associated reports and identities of interpreting radiologists were removed. Indications for the studies and descriptions of the discrepancies were provided. Twenty-one subspecialist attending radiologists rated the cases using the Radpeer scoring system. Multirater kappa statistics were used to assess interrater agreement, both with the standard scoring system and with dichotomized scores to reflect the practice of further review for cases rated 3 and 4. Subgroup analyses were conducted to assess subspecialist evaluation of cases.

RESULTS. Interrater agreement was slight to fair compared with that expected by chance. For the group of 21 raters, the kappa values were 0.11 (95% CI, 0.06-0.16) with the standard scoring system and 0.20 (95% CI, 0.13-0.27) with dichotomized scores. There was disagreement about whether a discrepancy had occurred in 20 cases. Subgroup analyses did not reveal significant differences in the degree of interrater agreement.

CONCLUSION. The identification of discrepant interpretations is valuable for the education of individual radiologists and for larger-scale quality assurance and quality improvement efforts. Our results show that a ratings-based peer review system is unreliable and subjective for the evaluation of discrepant interpretations. Resources should be devoted to developing more robust and objective assessment procedures, particularly those with clear quality improvement goals.
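The methods refer to multirater kappa statistics computed on Radpeer scores, both on the standard 1-4 scale and after dichotomizing scores into 1-2 versus 3-4 (the threshold for further review). The following Python sketch shows how a multirater (Fleiss') kappa of this kind might be computed; the rating counts, the fleiss_kappa helper, and the dichotomization step are illustrative assumptions, not the study's data or code.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' multirater kappa.

    counts: array of shape (n_subjects, n_categories); counts[i, j] is the
    number of raters who assigned subject i to category j. Every row must
    sum to the same number of raters.
    """
    counts = np.asarray(counts, dtype=float)
    n_subjects, _ = counts.shape
    n_raters = counts[0].sum()

    # Per-subject observed agreement.
    p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()

    # Chance agreement from the marginal category proportions.
    p_j = counts.sum(axis=0) / (n_subjects * n_raters)
    p_e = np.square(p_j).sum()

    return (p_bar - p_e) / (1 - p_e)

# Hypothetical example: 5 cases rated by 21 radiologists on the Radpeer 1-4 scale.
# Each row gives how many of the 21 raters assigned each score to that case.
ratings = [
    [12, 6, 2, 1],
    [3, 10, 7, 1],
    [15, 4, 2, 0],
    [2, 8, 9, 2],
    [1, 5, 11, 4],
]
print(f"kappa = {fleiss_kappa(ratings):.2f}")

# Dichotomized scores: 1-2 (no further review) vs. 3-4 (further review).
dichotomized = [[r[0] + r[1], r[2] + r[3]] for r in ratings]
print(f"dichotomized kappa = {fleiss_kappa(dichotomized):.2f}")
```

Collapsing the four scores into two categories typically raises agreement, since raters who differ only between adjacent scores (for example, 3 versus 4) are counted as agreeing; this mirrors the increase from 0.11 to 0.20 reported in the results.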
