
Expected Exponential Loss for Gaze-Based Video and Volume Ground Truth Annotation


Abstract

Many recent machine learning approaches used in medical imaging are highly reliant on large amounts of image and ground-truth data. In the context of object segmentation, pixel-wise annotations are extremely expensive to collect, especially in video and 3D volumes. To reduce this annotation burden, we propose a novel framework that allows annotators to simply observe the object to segment while a $200 eye gaze tracker records where they looked. Our method then estimates pixel-wise probabilities for the presence of the object throughout the sequence, from which we train a classifier in a semi-supervised setting using a novel Expected Exponential loss function. We show that our framework provides superior performance over existing strategies across a wide range of medical image settings, and that our method can be combined with current crowd-sourcing paradigms as well.
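The abstract does not spell out the loss itself, but the idea of an "expected" exponential loss under probabilistic (soft) labels is commonly written as the expectation of the classical exponential loss exp(-y f(x)) over the label distribution. A minimal sketch of that formulation, assuming binary labels y in {-1, +1} with per-pixel probability p = P(y = +1) (the function name and interface here are illustrative, not the authors' code):

```python
import numpy as np

def expected_exponential_loss(scores, probs):
    """Expected exponential loss under soft (probabilistic) labels.

    For a binary label y in {-1, +1} with P(y = +1) = p and a classifier
    score f(x), the expectation of the exponential loss exp(-y f(x)) is

        E_y[exp(-y f)] = p * exp(-f) + (1 - p) * exp(f),

    which reduces to the ordinary exponential loss when p is 0 or 1,
    so confidently labeled pixels behave exactly as in boosting-style
    training, while uncertain pixels contribute a softened penalty.
    """
    scores = np.asarray(scores, dtype=float)
    probs = np.asarray(probs, dtype=float)
    return probs * np.exp(-scores) + (1.0 - probs) * np.exp(scores)

# A confident positive pixel (p = 1) with a positive score incurs low
# loss; an uncertain pixel (p = 0.5) is penalized symmetrically in f.
print(expected_exponential_loss([2.0, 2.0, 0.0], [1.0, 0.5, 0.3]))
```

With per-pixel probabilities estimated from gaze data plugged in as `probs`, minimizing this quantity over the classifier scores trains on all pixels at once, weighting each by how certain its gaze-derived label is.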
