Venue: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

Generalizing Hand Segmentation in Egocentric Videos With Uncertainty-Guided Model Adaptation


Abstract

Although the performance of hand segmentation in egocentric videos has been significantly improved by CNNs, generalizing trained models to new domains, e.g., unseen environments, remains challenging. In this work, we address the hand segmentation generalization problem without requiring segmentation labels in the target domain. To this end, we propose a Bayesian CNN-based model adaptation framework for hand segmentation that introduces and considers two key factors: 1) prediction uncertainty when the model is applied in a new domain, and 2) common information about hand shapes shared across domains. Accordingly, we propose an iterative self-training method for hand segmentation in the new domain, guided by the model uncertainty estimated by a Bayesian CNN. We further use an adversarial component in our framework to exploit shared information about hand shapes to constrain the model adaptation process. Experiments on multiple egocentric datasets show that the proposed method significantly improves the generalization performance of hand segmentation.
