IEEE International Conference on Computer Vision

One Shot Learning via Compositions of Meaningful Patches



Abstract

The task of discriminating one object from another is almost trivial for a human being. However, this task is computationally taxing for most modern machine learning methods, whereas humans perform it with ease given very few examples to learn from. It has been proposed that the quick grasp of a concept may come from the shared knowledge between the new example and examples previously learned. We believe that the key to one-shot learning is the sharing of common parts, as each part holds immense amounts of information on how a visual concept is constructed. We propose an unsupervised method for learning a compact dictionary of image patches representing meaningful components of an object. Using those patches as features, we build a compositional model that outperforms a number of popular algorithms on a one-shot learning task. We demonstrate the effectiveness of this approach on hand-written digits and show that this model generalizes to multiple datasets.
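The abstract only describes the pipeline at a high level. Below is a minimal, hypothetical sketch of the general idea rather than the authors' method: an unsupervised patch dictionary is learned with k-means (standing in for the paper's patch-discovery step), each image is encoded as a histogram over dictionary atoms, and a query is classified against a single stored exemplar per class. All names, parameter values, and the k-means/nearest-neighbour choices here are assumptions, not details taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.image import extract_patches_2d

# Hypothetical settings; the paper's actual patch size and dictionary size may differ.
PATCH_SIZE = (7, 7)
DICT_SIZE = 64


def learn_dictionary(unlabeled_images, n_atoms=DICT_SIZE, patches_per_image=50):
    """Unsupervised dictionary of patch prototypes via k-means
    (a stand-in for the paper's patch-discovery procedure)."""
    patches = np.concatenate([
        extract_patches_2d(img, PATCH_SIZE, max_patches=patches_per_image,
                           random_state=0)
        for img in unlabeled_images
    ])
    patches = patches.reshape(len(patches), -1)
    return KMeans(n_clusters=n_atoms, n_init=10, random_state=0).fit(patches)


def encode(image, dictionary):
    """Represent an image as a normalized histogram over dictionary atoms."""
    patches = extract_patches_2d(image, PATCH_SIZE)
    patches = patches.reshape(-1, PATCH_SIZE[0] * PATCH_SIZE[1])
    assignments = dictionary.predict(patches)
    hist = np.bincount(assignments, minlength=dictionary.n_clusters).astype(float)
    return hist / hist.sum()


def one_shot_classify(query_image, exemplars, dictionary):
    """Assign the query to the class whose single exemplar has the closest encoding."""
    q = encode(query_image, dictionary)
    return min(exemplars,
               key=lambda label: np.linalg.norm(q - encode(exemplars[label], dictionary)))
```

With MNIST-style 28x28 grayscale arrays, one would fit the dictionary on an unlabeled pool and then call one_shot_classify(query, {digit: single_training_image}, dictionary); the paper's compositional model replaces the simple histogram-distance rule used in this sketch.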
