Self-Supervised Representation Learning From Multi-Domain Data

Abstract

We present an information-theoretically motivated constraint for self-supervised representation learning from multiple related domains. In contrast to previous self-supervised learning methods, our approach learns from multiple domains, which has the benefit of decreasing the built-in bias of any individual domain while leveraging information from, and allowing knowledge transfer across, all of them. The proposed mutual information constraints encourage the neural network to extract the invariant information that is common across domains while simultaneously preserving the information peculiar to each domain. We adopt tractable upper and lower bounds on mutual information to make the proposed constraints solvable. The learned representations are less biased and more robust to the input images. Extensive experimental results on both multi-domain and large-scale datasets demonstrate the necessity and advantage of multi-domain self-supervised learning with mutual information constraints. Representations learned with state-of-the-art methods in our framework achieve better performance than those learned on a single domain.
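The abstract mentions tractable upper and lower bounds on mutual information without spelling them out, so the sketch below is only an illustration of how such surrogates are commonly instantiated, not the authors' formulation: it pairs the InfoNCE lower bound (maximized so that representations of the same content from different domains share information) with a CLUB-style variational upper bound (minimized so that a shared code and a domain-specific code stay disentangled). All module names, dimensions, and loss weights here are hypothetical placeholders.

```python
# Minimal sketch of the two kinds of tractable MI surrogates the abstract refers to,
# assuming an InfoNCE lower bound and a CLUB-style variational upper bound; the
# paper's actual bounds and objective may differ.
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


def info_nce_lower_bound(z_a, z_b, temperature=0.1):
    """InfoNCE lower bound on I(z_a; z_b); row i of z_a is paired with row i of z_b."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature                    # (N, N) similarity matrix
    labels = torch.arange(z_a.size(0), device=z_a.device)   # positives on the diagonal
    # log N minus the cross-entropy of identifying the true pair is the InfoNCE estimate.
    return math.log(z_a.size(0)) - F.cross_entropy(logits, labels)


class CLUBUpperBound(nn.Module):
    """CLUB-style upper bound on I(z; v) using a Gaussian variational network q(v | z)."""

    def __init__(self, z_dim, v_dim, hidden=128):
        super().__init__()
        self.mu = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(), nn.Linear(hidden, v_dim))
        self.logvar = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(), nn.Linear(hidden, v_dim))

    def log_likelihood(self, z, v):
        # Maximized w.r.t. this module's parameters so q(v | z) tracks the true conditional.
        mu, logvar = self.mu(z), self.logvar(z)
        return (-(v - mu) ** 2 / logvar.exp() - logvar).sum(dim=1).mean()

    def forward(self, z, v):
        # Bound estimate: E[log q(v|z)] over matched pairs minus the same quantity
        # averaged over mismatched pairs (each mu_i against every v_j).
        mu, logvar = self.mu(z), self.logvar(z)
        positive = (-(v - mu) ** 2 / 2.0 / logvar.exp()).sum(dim=1)
        sq_all = ((mu.unsqueeze(1) - v.unsqueeze(0)) ** 2).mean(dim=1)   # (N, D)
        negative = (-sq_all / 2.0 / logvar.exp()).sum(dim=1)
        return (positive - negative).mean()


# Toy usage: random tensors stand in for encoder outputs on a batch of paired images
# from two domains; in training, q would be updated to maximize club.log_likelihood.
N, SHARED_DIM, PRIVATE_DIM = 32, 64, 64
shared_a, shared_b = torch.randn(N, SHARED_DIM), torch.randn(N, SHARED_DIM)
private_a = torch.randn(N, PRIVATE_DIM)
club = CLUBUpperBound(SHARED_DIM, PRIVATE_DIM)
loss = -info_nce_lower_bound(shared_a, shared_b) + 0.1 * club(shared_a, private_a)
```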
