AAAI Conference on Artificial Intelligence

Self-Supervised, Semi-Supervised, Multi-Context Learning for the Combined Classification and Segmentation of Medical Images (Student Abstract)


Abstract

To tackle the problem of limited annotated data, semi-supervised learning is attracting attention as an alternative to fully supervised models. Moreover, optimizing a multiple-task model to learn "multiple contexts" can provide better generalizability compared to single-task models. We propose a novel semi-supervised multiple-task model leveraging self-supervision and adversarial training - namely, self-supervised, semi-supervised, multi-context learning (S⁴MCL) - and apply it to two crucial medical imaging tasks, classification and segmentation. Our experiments on spine X-rays reveal that the S⁴MCL model significantly outperforms semi-supervised single-task, semi-supervised multi-context, and fully-supervised single-task models, even with a 50% reduction of classification and segmentation labels.
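The abstract describes the setup only at a high level. As an illustration of the general idea of a shared multi-task model trained on both labeled and unlabeled images, the sketch below shows a shared-encoder network with a classification head and a segmentation head, plus a combined loss. The layer sizes, loss weights, and the simple consistency term used here in place of the paper's self-supervised and adversarial objectives are assumptions for illustration, not the authors' S⁴MCL implementation.

```python
# Minimal, hypothetical sketch of a shared-encoder multi-task model with a
# classification head and a segmentation head, plus a combined loss over
# labeled and unlabeled batches. This is NOT the S⁴MCL implementation; the
# layer sizes, loss weights, and the consistency term (a stand-in for the
# paper's self-supervised/adversarial objectives) are illustrative choices.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiContextNet(nn.Module):
    def __init__(self, num_classes=2, seg_classes=1):
        super().__init__()
        # Shared encoder (small CNN assumed for illustration).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Classification head over globally pooled features.
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )
        # Segmentation head: 1x1 conv, upsampled back to input size in forward().
        self.seg_head = nn.Conv2d(32, seg_classes, 1)

    def forward(self, x):
        feats = self.encoder(x)
        cls_logits = self.cls_head(feats)
        seg_logits = F.interpolate(self.seg_head(feats), size=x.shape[-2:],
                                   mode="bilinear", align_corners=False)
        return cls_logits, seg_logits


def combined_loss(model, labeled, unlabeled, w_seg=1.0, w_unsup=0.5):
    """Supervised classification + segmentation loss on the labeled batch,
    plus a consistency term on the unlabeled batch (an illustrative stand-in
    for the self-supervised / adversarial objectives named in the abstract)."""
    x, y_cls, y_seg = labeled
    cls_logits, seg_logits = model(x)
    sup = F.cross_entropy(cls_logits, y_cls) + \
          w_seg * F.binary_cross_entropy_with_logits(seg_logits, y_seg)
    # Unlabeled branch: predictions for a clean view (no gradient) and a
    # noise-perturbed view of the same images should agree.
    with torch.no_grad():
        t_cls, t_seg = model(unlabeled)
    s_cls, s_seg = model(unlabeled + 0.05 * torch.randn_like(unlabeled))
    unsup = F.mse_loss(s_cls, t_cls) + F.mse_loss(s_seg, t_seg)
    return sup + w_unsup * unsup


# Example usage with random tensors (all shapes assumed for illustration):
# model = MultiContextNet()
# labeled = (torch.randn(4, 1, 64, 64),            # images
#            torch.randint(0, 2, (4,)),            # class labels
#            torch.rand(4, 1, 64, 64))             # segmentation masks
# loss = combined_loss(model, labeled, torch.randn(4, 1, 64, 64))
# loss.backward()
```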
