PLoS One
Opening the black box of artificial intelligence for clinical decision support: A study predicting stroke outcome

Abstract

State-of-the-art machine learning (ML) artificial intelligence methods are increasingly leveraged in clinical predictive modeling to provide clinical decision support systems to physicians. Modern ML approaches such as artificial neural networks (ANNs) and tree boosting often perform better than more traditional methods like logistic regression, but they offer only limited insight into the resulting predictions. In the medical domain, however, understanding of applied models is essential, in particular when informing clinical decision support. Thus, in recent years, interpretability methods for modern ML methods have emerged that potentially allow explainable predictions paired with high performance. To our knowledge, this work presents the first explainability comparison of two modern ML methods, tree boosting and multilayer perceptrons (MLPs), with traditional logistic regression methods, using a stroke outcome prediction paradigm. Here, we used clinical features to predict a dichotomized 90-day post-stroke modified Rankin Scale (mRS) score. For interpretability, we evaluated the importance of clinical features with regard to predictions using deep Taylor decomposition for the MLP, Shapley values for tree boosting, and model coefficients for logistic regression. In terms of performance, as measured by area under the curve (AUC) values on the test dataset, all models performed comparably: logistic regression AUCs were 0.83, 0.83, and 0.81 for three different regularization schemes; the tree boosting AUC was 0.81; the MLP AUC was 0.83. Importantly, the interpretability analysis demonstrated consistent results across models, with age and stroke severity ranked among the most important predictive features by all methods. For less important features, some differences were observed between the methods. Our analysis suggests that modern machine learning methods can provide explainability that is compatible with domain knowledge and the feature rankings of traditional methods. Future work should focus on replicating these findings in other datasets and on further testing of different explainability methods.
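The comparison paradigm summarized above can be sketched in a few lines of Python. The snippet below is illustrative only, not the study's code: synthetic data stands in for the clinical features and the dichotomized 90-day mRS label, scikit-learn's GradientBoostingClassifier is a generic stand-in for the tree boosting model, and the shap package supplies the Shapley values. The deep Taylor decomposition analysis of the MLP is omitted here, since it requires a dedicated explainability library.

import numpy as np
import shap  # assumed dependency: pip install shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for clinical features and a dichotomized 90-day mRS label.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# L2-regularized logistic regression (one of several possible regularization schemes).
logreg = LogisticRegression(penalty="l2", max_iter=1000).fit(X_train, y_train)

# Generic gradient-boosted trees standing in for the paper's tree boosting model.
boost = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Performance comparison via test-set AUC.
for name, model in [("logistic regression", logreg), ("tree boosting", boost)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: test AUC = {auc:.2f}")

# Interpretability: absolute model coefficients rank features for logistic regression ...
print("logistic regression |coef|:", np.abs(logreg.coef_).ravel())

# ... while mean absolute Shapley values give a global ranking for tree boosting.
explainer = shap.TreeExplainer(boost)
shap_values = explainer.shap_values(X_test)
print("tree boosting mean |SHAP|:", np.abs(shap_values).mean(axis=0))

In this sketch, the rankings derived from the absolute logistic regression coefficients and the mean absolute Shapley values play the role of the study's feature-importance comparison: agreement between models on the top-ranked features is the kind of consistency the abstract reports for age and stroke severity.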
