
Exploring Category Attention for Open Set Domain Adaptation



Abstract

Great success has been achieved in the area of unsupervised domain adaptation, which learns to generalize from the labeled source domain to the unlabeled target domain. However, most existing techniques can only handle the closed-set scenario, which requires the source domain and the target domain to share a common category label set. In this work, we propose a two-stage method for the more challenging task of open set domain adaptation, where the target domain contains categories unseen in the source domain. Our first stage formulates the alignment of the two domains as a semi-supervised clustering problem and initially associates each target-domain sample $x^{t} \in \mathcal{X}^{t}$ with a source-domain category label $\ell^{s} \in \mathcal{L}^{s}$. To this end, we use a self-training strategy to learn a teacher network and a student network, both of which adopt the self-attention mechanism. Our second stage refines the resulting clusters by identifying the negative associations $(x^{t}, \ell^{s})$ and labeling the involved $x^{t}$ as unknown. For this purpose, we investigate the compatibility of each association by replacing the self-attention maps in the last convolutional layers with the newly proposed category attention maps (CAMs), which locate the informative feature pixels for a given category. Experimental results on three public datasets show the effectiveness and robustness of our method in adaptation across various domain pairs.
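
The following Python/PyTorch sketch illustrates one way the two stages described above could be instantiated; it is not the authors' implementation. The function names (stage1_pseudo_label, update_teacher, stage2_reject_unknown), the EMA-style teacher update, the CAM-style compatibility score, and the threshold tau are all illustrative assumptions introduced here.

import torch
import torch.nn.functional as F

@torch.no_grad()
def stage1_pseudo_label(teacher, target_loader, device="cpu"):
    # Stage 1 (sketch): associate each target-domain sample with a tentative
    # source-domain category label using the teacher network's predictions.
    associations = []
    for x_t, _ in target_loader:
        probs = F.softmax(teacher(x_t.to(device)), dim=1)   # [B, |L_s|]
        conf, pseudo = probs.max(dim=1)                      # tentative (x_t, l_s) pairs
        associations.append((pseudo.cpu(), conf.cpu()))
    return associations

def update_teacher(teacher, student, momentum=0.999):
    # Self-training (assumed EMA variant): the teacher's parameters slowly
    # track the student, which is trained on the pseudo-labeled target data.
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.data.mul_(momentum).add_(p_s.data, alpha=1.0 - momentum)

@torch.no_grad()
def stage2_reject_unknown(feature_map, classifier_weight, pseudo_label, tau=0.3):
    # Stage 2 (sketch): score the compatibility of one (x_t, l_s) association.
    #   feature_map:       [C, H, W] activations of the last convolutional layer
    #   classifier_weight: [|L_s|, C] weights of the final linear classifier
    # A CAM-style map for the pseudo label is formed by weighting feature
    # channels with the class weights; a weak response is treated as a
    # negative association and the sample is relabeled as "unknown" (-1).
    w = classifier_weight[pseudo_label]                      # [C]
    cam = torch.relu(torch.einsum("c,chw->hw", w, feature_map))
    cam = cam / (cam.max() + 1e-8)                           # normalize to [0, 1]
    compatibility = cam.mean().item()                        # crude compatibility score
    return pseudo_label if compatibility >= tau else -1

The compatibility score above is a deliberately simple stand-in for the CAM-based criterion the abstract describes; the paper's actual scoring rule and rejection threshold are not given in this listing.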
