BJR Open

Balancing accuracy and interpretability of machine learning approaches for radiation treatment outcomes modeling



Abstract

Radiation outcomes prediction (ROP) plays an important role in personalized prescription and adaptive radiotherapy. A clinical decision may depend not only on an accurate prediction of radiation outcomes, but also on an informed understanding of the relationships among patient characteristics, radiation response, and treatment plans. As more patient biophysical information becomes available, machine learning (ML) techniques have great potential for improving ROP. Creating explainable ML methods is an ultimate goal for clinical practice but remains a challenging one. Towards complete explainability, the interpretability of ML approaches needs to be explored first. Hence, this review focuses on the application of ML techniques for clinical adoption in radiation oncology, balancing the accuracy of the predictive model of interest against its interpretability. An ML algorithm can generally be classified as an interpretable (IP) or non-interpretable (NIP, "black box") technique. While the former may provide a clearer explanation to aid clinical decision-making, its prediction performance is generally inferior to that of the latter. Therefore, great efforts and resources have been dedicated to balancing the accuracy and interpretability of ML approaches in ROP, but more still needs to be done. In this review, current progress in increasing the accuracy of IP ML approaches is introduced, and major trends in improving interpretability and alleviating the "black box" stigma of ML in radiation outcomes modeling are summarized. Efforts to integrate IP and NIP ML approaches to produce predictive models with both higher accuracy and interpretability for ROP are also discussed.
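To make the IP/NIP distinction concrete, the following is a minimal sketch, not taken from the review itself: it contrasts an interpretable logistic regression with a "black box" gradient-boosting classifier on synthetic stand-in data. The dataset, model choices, and comparison setup are illustrative assumptions, not the paper's methods.

```python
# Sketch only: contrasting an interpretable (IP) model with a
# non-interpretable (NIP) one, using synthetic data as a stand-in
# for patient features vs. a binary radiation outcome.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, n_informative=4,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# IP model: coefficients map directly onto per-feature effects.
ip_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# NIP ("black box") model: often higher accuracy, opaque internals.
nip_model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

for name, model in [("logistic (IP)", ip_model), ("boosting (NIP)", nip_model)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")

# The IP model's interpretation comes essentially for free:
print("logistic coefficients:", np.round(ip_model.coef_[0], 2))
```

On data like this, the boosted model will typically edge out the linear one in AUC, while only the linear model's coefficients read directly as feature effects; that gap is the accuracy-interpretability trade-off the review surveys.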
