
Domain Generalization Using Shape Representation



Abstract

CNN-based representations have greatly advanced the state of the art in visual recognition, but the community has primarily focused on the setting where the training and test sets belong to the same dataset/distribution. However, models trained on one dataset do not generalize well to other datasets [3,5]. Human vision, which is robust to data/domain shifts, relies on shape in addition to texture/appearance, as shown in prior research. On the other hand, prior work in computer vision shows that CNN representations are biased towards texture [4]. We propose a new shape-based representation which captures the medial axis transform and skeleton of an object. As shown in Fig. 1, shape is more robust to domain shifts than texture. We apply it in the domain generalization (DG) setting: methods are trained on a set of source domains, and are tested on a disjoint domain from which no data is available at training time. Unlike related prior shape work [7,8], which primarily targeted cross-modal retrieval and scene classification, our representation is denser than an edge map.
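As an illustration of the kind of dense, skeleton-based representation the abstract describes, below is a minimal sketch using scikit-image's medial_axis on a binary object mask. The helper name shape_channel and the thickness weighting are illustrative assumptions, not the paper's actual implementation.

import numpy as np
from skimage.morphology import medial_axis

def shape_channel(mask):
    # mask: 2D boolean array, True on the object.
    # medial_axis returns the skeleton together with the distance transform;
    # weighting skeleton pixels by their distance to the boundary gives a
    # map that encodes local object thickness along the medial axis.
    skeleton, distance = medial_axis(mask, return_distance=True)
    return skeleton * distance

# Example: a filled disc; its medial axis collapses to (approximately) the centre.
yy, xx = np.mgrid[:64, :64]
disc = (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2
shape_map = shape_channel(disc)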
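The DG evaluation protocol in the abstract (train on a set of source domains, test on a disjoint held-out domain) amounts to a leave-one-domain-out split. The sketch below is a generic illustration with placeholder domain names, not the paper's experimental code.

def leave_one_domain_out(domains, target):
    # domains: dict mapping domain name -> dataset.
    # The target domain is excluded from training and used only at test time.
    sources = {name: data for name, data in domains.items() if name != target}
    return sources, domains[target]

# Usage (placeholder domain names): train on the pooled source domains,
# then evaluate on the unseen target domain.
# sources, test_set = leave_one_domain_out(all_domains, target="sketch")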
