Continual Class Incremental Learning for CT Thoracic Segmentation

Abstract

Deep learning organ segmentation approaches require large amounts of annotated training data, which is limited in supply due to reasons of confidentiality and the time required for expert manual annotation. Therefore, being able to train models incrementally without having access to previously used data is desirable. A common form of sequential training is fine-tuning (FT). In this setting, a model learns a new task effectively but loses performance on previously learned tasks. The Learning without Forgetting (LwF) approach addresses this issue by replaying its own predictions for past tasks during model training. In this work, we evaluate FT and LwF for class-incremental learning in multi-organ segmentation using the publicly available AAPM dataset. We show that LwF can successfully retain knowledge of previous segmentations; however, its ability to learn a new class decreases with the addition of each class. To address this problem, we propose an adversarial continual learning segmentation approach (ACLSeg), which disentangles the feature space into task-specific and task-invariant features. This enables preservation of performance on past tasks and effective acquisition of new knowledge.
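
The LwF idea described above can be made concrete with a small sketch. The snippet below is illustrative only, assuming a PyTorch setup with a frozen copy of the previously trained model; the paper's actual losses, architecture, and hyperparameters are not given here, so `lwf_step`, the weight `lam`, and the tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def lwf_step(model, old_model, images, new_class_masks, lam=1.0):
    """One LwF-style training step for class-incremental segmentation:
    learn the newly added class from ground truth while replaying the
    frozen old model's predictions for previously learned classes."""
    with torch.no_grad():
        old_logits = old_model(images)        # soft targets for past classes
    logits = model(images)                    # shape [B, C_old + 1, H, W]
    n_old = old_logits.shape[1]

    # Distillation term: keep old-class outputs close to the old model's.
    distill = F.kl_div(
        F.log_softmax(logits[:, :n_old], dim=1),
        F.softmax(old_logits, dim=1),
        reduction="batchmean",
    )
    # Supervised term: learn the new class from its annotations.
    new_task = F.binary_cross_entropy_with_logits(
        logits[:, n_old], new_class_masks.float()
    )
    loss = new_task + lam * distill
    loss.backward()
    return loss
```

Because the distillation targets come from the model's own past predictions rather than stored images, no previously used data needs to be retained.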
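The disentanglement behind ACLSeg can likewise be sketched. One common way to realise such an adversarial game is a gradient-reversal layer in front of a task discriminator: the discriminator tries to identify the task from the shared features, while the reversed gradient pushes the shared encoder toward task-invariant features. The sketch below rests on that assumption; the discriminator architecture, shapes, and training schedule are illustrative and not the paper's implementation, which may, for instance, alternate discriminator and encoder updates instead.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates gradients on the backward
    pass, so the encoder is trained to fool the task discriminator."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

def task_invariance_loss(shared_features, task_ids, discriminator):
    """Discriminator loss on task identity, with reversed gradients
    flowing back into the shared (task-invariant) feature extractor."""
    task_logits = discriminator(GradReverse.apply(shared_features))
    return F.cross_entropy(task_logits, task_ids)

# Illustrative usage; the 64x8x8 feature map and 3 tasks are assumptions.
discriminator = nn.Sequential(
    nn.Flatten(), nn.Linear(64 * 8 * 8, 128), nn.ReLU(), nn.Linear(128, 3)
)
feats = torch.randn(4, 64, 8, 8, requires_grad=True)  # shared-encoder output
task_ids = torch.randint(0, 3, (4,))                  # task label per sample
loss = task_invariance_loss(feats, task_ids, discriminator)
loss.backward()  # feats.grad now pushes toward task-invariant features
```

Task-specific heads would consume their own private features alongside this shared representation, which is what allows a new class to be acquired without overwriting what the shared features encode for past tasks.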
