Journal: Spine

External Validation of SpineNet, an Open-Source Deep Learning Model for Grading Lumbar Disk Degeneration MRI Features, Using the Northern Finland Birth Cohort 1966


Abstract

Study Design. This is a retrospective observational study to externally validate a deep learning image classification model.

Objective. Deep learning models such as SpineNet offer the possibility of automating the process of disk degeneration (DD) classification from magnetic resonance imaging (MRI). External validation is an essential step in their development. The aim of this study was to externally validate SpineNet predictions for DD using the Pfirrmann classification and Modic changes (MCs) on data from the Northern Finland Birth Cohort 1966 (NFBC1966).

Summary of Data. We validated SpineNet using data from 1331 NFBC1966 participants for whom both lumbar spine MRI data and consensus DD gradings were available.

Materials and Methods. SpineNet returned Pfirrmann grade and MC presence from T2-weighted sagittal lumbar MRI sequences from NFBC1966, a data set geographically and temporally separated from its training data set. A range of agreement and reliability metrics were used to compare predictions with expert radiologists. Subsets of data that match the SpineNet training data more closely were also tested.

Results. Balanced accuracy was 78% (77%-79%) for DD and 86% (85%-86%) for MC. Interrater reliability for Pfirrmann grading was Lin concordance correlation coefficient=0.86 (0.85-0.87) and Cohen kappa=0.68 (0.67-0.69). In a low back pain subset, these reliability metrics remained largely unchanged. In total, 20.83% of disks were rated differently by SpineNet compared with the human raters, but only 0.85% of disks had a grade difference >1. Interrater reliability for MC detection was kappa=0.74 (0.72-0.75). In the low back pain subset, this metric was almost unchanged at kappa=0.76 (0.73-0.79).

Conclusions. In this study, SpineNet has been benchmarked against expert human raters in the research setting. It has matched human reliability and demonstrates robust performance despite the multiple challenges facing model generalizability.
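The abstract reports interrater reliability via Cohen's kappa and Lin's concordance correlation coefficient (CCC). As a minimal sketch of how these two agreement metrics are computed, the following uses toy Pfirrmann-style grades (illustrative values only, not the study data):

```python
# Illustrative sketch: Cohen's kappa and Lin's CCC computed from scratch.
# The grade lists below are made-up examples, not NFBC1966 data.
from statistics import mean, pvariance

def cohen_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(a)
    labels = sorted(set(a) | set(b))
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement under independent rater marginals.
    p_exp = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

def lin_ccc(a, b):
    """Lin's concordance correlation coefficient (population moments)."""
    ma, mb = mean(a), mean(b)
    va, vb = pvariance(a), pvariance(b)
    cov = mean((x - ma) * (y - mb) for x, y in zip(a, b))
    return 2 * cov / (va + vb + (ma - mb) ** 2)

# Toy Pfirrmann grades (1-5) from a "model" and a "radiologist".
model = [2, 3, 3, 4, 5, 2, 3, 4, 1, 3]
rater = [2, 3, 4, 4, 5, 2, 3, 3, 1, 3]
print(round(cohen_kappa(model, rater), 3))  # → 0.73
print(round(lin_ccc(model, rater), 3))      # → 0.917
```

Kappa treats grades as unordered categories, while CCC rewards near-misses on the ordinal scale, which is why the study reports both alongside the observation that only 0.85% of disks differed by more than one grade.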
