Journal: Foot and Ankle International

Interobserver and intraobserver reliability of two classification systems for intra-articular calcaneal fractures.



Abstract

BACKGROUND: For a fracture classification to be useful, it must provide prognostic significance, interobserver reliability, and intraobserver reproducibility. Most studies have found reliability and reproducibility to be poor for fracture classification schemes. The purpose of this study was to evaluate the interobserver and intraobserver reliability of the Sanders and Crosby-Fitzgibbons classification systems, two commonly used methods for classifying intra-articular calcaneal fractures.

METHODS: Twenty-five CT scans of intra-articular calcaneal fractures occurring at one trauma center were reviewed. The CT images were presented to eight observers (two orthopaedic surgery chief residents, two foot and ankle fellows, two fellowship-trained orthopaedic trauma surgeons, and two fellowship-trained foot and ankle surgeons) on two separate occasions 8 weeks apart. On each viewing, observers were asked to classify the fractures according to both the Sanders and Crosby-Fitzgibbons systems. Interobserver reliability and intraobserver reproducibility were assessed with computer-generated kappa statistics (SAS software; SAS Institute Inc., Cary, North Carolina).

RESULTS: Total unanimity (all eight observers assigning the same fracture classification) was achieved only 24% (six of 25) of the time with the Sanders system and 36% (nine of 25) of the time with the Crosby-Fitzgibbons scheme. Interobserver reliability for the Sanders classification method reached a moderate (kappa = 0.48, 0.50) level of agreement when the subclasses were included. The agreement level increased but remained in the moderate (kappa = 0.55, 0.55) range when the subclasses were excluded. Interobserver agreement reached a substantial (kappa = 0.63, 0.63) level with the Crosby-Fitzgibbons system. Intraobserver reproducibility was better for both schemes. The Sanders system with subclasses included reached moderate (kappa = 0.57) agreement, while ignoring the subclasses brought agreement into the substantial (kappa = 0.77) range. The overall intraobserver agreement was substantial (kappa = 0.74) for the Crosby-Fitzgibbons system.

CONCLUSIONS: Although intraobserver kappa values reached substantial levels and the Crosby-Fitzgibbons system generally showed greater agreement, we were unable to demonstrate excellent interobserver or intraobserver reliability with either classification scheme. While a system with perfect agreement is likely impossible, our results indicate that these classifications lack the reproducibility to be considered ideal.
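The agreement figures above are kappa coefficients, which correct raw percent agreement for the agreement expected by chance. As a rough illustration of how such a statistic is computed (the study itself used SAS; the ratings below are hypothetical, not from the study), a minimal Cohen's kappa sketch for one observer's two readings of the same fractures:

```python
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two sets of categorical ratings of the same cases."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    # Observed agreement: fraction of cases rated identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: product of each rater's marginal frequencies, summed
    # over categories.
    count_a = Counter(ratings_a)
    count_b = Counter(ratings_b)
    p_e = sum((count_a[c] / n) * (count_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: one observer classifies 10 fractures on two
# occasions (Sanders types, subclasses ignored).
first  = ["II", "III", "II", "IV", "III", "II", "III", "IV", "II", "III"]
second = ["II", "III", "III", "IV", "III", "II", "II", "IV", "II", "III"]
print(round(cohen_kappa(first, second), 2))  # prints 0.69
```

Note that interobserver agreement among all eight observers, as reported in this study, requires a multi-rater extension (e.g., Fleiss' kappa); the two-rater form shown here corresponds only to the intraobserver comparison of a single observer's paired readings.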
