International Conference on Computer Vision

Learning Compositional Representations for Few-Shot Recognition


Abstract

One of the key limitations of modern deep learning approaches lies in the amount of data required to train them. Humans, by contrast, can learn to recognize novel categories from just a few examples. Instrumental to this rapid learning ability is the compositional structure of concept representations in the human brain --- something that deep learning models are lacking. In this work, we make a step towards bridging this gap between human and machine learning by introducing a simple regularization technique that allows the learned representation to be decomposable into parts. Our method uses category-level attribute annotations to disentangle the feature space of a network into subspaces corresponding to the attributes. These attributes can be either purely visual, like object parts, or more abstract, like openness and symmetry. We demonstrate the value of compositional representations on three datasets: CUB-200-2011, SUN397, and ImageNet, and show that they require fewer examples to learn classifiers for novel categories.
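The abstract describes a regularizer that uses category-level attribute annotations to disentangle a network's feature space into attribute-aligned subspaces. Below is a minimal, hypothetical PyTorch-style sketch of one way such a regularizer could be set up: the feature vector is split into equal-sized blocks, and each block is supervised with its attribute label alongside the main classification loss. The class name, the equal-sized split, the shared number of attribute classes, and the loss weighting are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CompositionalRegularizer(nn.Module):
    """Illustrative sketch (not the paper's exact method): split a feature
    vector into equal-sized subspaces, one per attribute, and supervise each
    subspace with its category-level attribute label so the learned
    representation factorizes by attribute."""

    def __init__(self, feat_dim, num_attributes, attr_classes):
        super().__init__()
        assert feat_dim % num_attributes == 0, "assume an even split for simplicity"
        self.sub_dim = feat_dim // num_attributes
        self.num_attributes = num_attributes
        # One linear attribute predictor per subspace (hypothetical head design).
        self.attr_heads = nn.ModuleList(
            nn.Linear(self.sub_dim, attr_classes) for _ in range(num_attributes)
        )

    def forward(self, features, attr_labels):
        # features: (batch, feat_dim); attr_labels: (batch, num_attributes) class indices
        subspaces = torch.split(features, self.sub_dim, dim=1)
        loss = 0.0
        for k, head in enumerate(self.attr_heads):
            loss = loss + F.cross_entropy(head(subspaces[k]), attr_labels[:, k])
        return loss / self.num_attributes


# Usage sketch: add the regularizer to the standard classification objective,
# e.g. total_loss = F.cross_entropy(logits, labels) + lam * reg(features, attr_labels),
# where lam is a tunable weight.
```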
