Spine > Interobserver and intraobserver reliability in the load sharing classification of the assessment of thoracolumbar burst fractures.

Abstract

STUDY DESIGN: The Load Sharing Classification of spinal fractures was evaluated by 5 observers on 2 occasions. OBJECTIVE: To evaluate the interobserver and intraobserver reliability of the Load Sharing Classification of spinal fractures in the assessment of thoracolumbar burst fractures. SUMMARY OF BACKGROUND DATA: The Load Sharing Classification of spinal fractures provides a basis for the choice of operative approaches, but the reliability of this classification system has not been established. METHODS: The radiographic and computed tomography scan images of 45 consecutive patients with thoracolumbar burst fractures were reviewed by 5 observers on 2 occasions 3 months apart. Interobserver reliability was assessed by comparing the fracture classifications determined by the 5 observers. Intraobserver reliability was evaluated by comparing the classifications determined by each observer in the first and second sessions. Ten paired interobserver comparisons and 5 intraobserver comparisons were then analyzed with kappa statistics. RESULTS: All 5 observers agreed on the final classification for 58% and 73% of the fractures on the first and second assessments, respectively. The average kappa coefficient for the 10 paired comparisons among the 5 observers was 0.79 (range 0.73-0.89) for the first assessment and 0.84 (range 0.81-0.95) for the second assessment. Interobserver agreement improved when the 3 components of the classification system were analyzed separately, reaching almost perfect interobserver reliability with average kappa values of 0.90 (range 0.82-0.97) for the first assessment and 0.92 (range 0.83-1.0) for the second. The kappa values for the 5 intraobserver comparisons ranged from 0.73 to 0.87 (average 0.78), indicating at least substantial agreement; 2 observers showed almost perfect intraobserver reliability. For the 3 components of the classification system, all observers reached almost perfect intraobserver agreement, with kappa values of 0.83 to 0.97 (average 0.89). CONCLUSIONS: Kappa statistics showed high levels of agreement when the Load Sharing Classification was used to assess thoracolumbar burst fractures. This system can be applied with excellent reliability.
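The pairwise agreement analysis described in the abstract rests on Cohen's kappa, which discounts the observed agreement between two raters by the agreement expected from their marginal rating frequencies alone. A minimal sketch of the computation (the observer scores below are purely hypothetical illustrations, not data from the study):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters scoring the same set of cases."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # Observed agreement: fraction of cases where the raters match.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: expected matches from each rater's marginal frequencies.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical observers assigning Load Sharing scores (3-9) to 10 fractures.
obs1 = [3, 4, 5, 6, 7, 7, 8, 9, 4, 5]
obs2 = [3, 4, 5, 6, 7, 8, 8, 9, 4, 6]
print(round(cohens_kappa(obs1, obs2), 2))  # → 0.77
```

The 10 paired interobserver comparisons in the study correspond to evaluating this statistic for every pair among the 5 observers; the 5 intraobserver comparisons apply it to each observer's first- versus second-session scores.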
