International Journal of Computer Vision

Local Alignments for Fine-Grained Categorization


Abstract

The aim of this paper is fine-grained categorization without human interaction. Different from prior work, which relies on detectors for specific object parts, we propose to localize distinctive details by roughly aligning the objects using just the overall shape. Classification then proceeds by examining the corresponding regions of the alignments. More specifically, the alignments are used to transfer part annotations from training images to unseen images (supervised alignment), or to blindly yet consistently segment the object into a number of regions (unsupervised alignment). We further argue that, for distinguishing subclasses, distribution-based features like color Fisher vectors are better suited to describing the localized appearance of fine-grained categories than popular matching-oriented, shape-sensitive features such as HOG. They capture the subtle local differences between subclasses while remaining robust to misalignments between distinctive details. We evaluate the local alignments on the CUB-2011 and Stanford Dogs datasets, composed of 200 bird and 120 dog species that are visually very hard to distinguish. In our experiments we study and show the benefit of the color Fisher vector parameterization, the influence of the alignment partitioning, and the significance of object segmentation for fine-grained categorization. Furthermore, we show that by using object detectors as voters to generate object-confidence saliency maps, we arrive at fully unsupervised, yet highly accurate, fine-grained categorization. The proposed local alignments set a new state of the art on both the fine-grained birds and dogs datasets, even without any human intervention. What is more, the local alignments reveal which appearance details are most decisive per fine-grained object category.
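The abstract names the encoding choice (color Fisher vectors pooled over aligned local regions) but not the encoding step itself. The sketch below is a minimal, generic implementation of the improved Fisher vector on per-region descriptors, assuming a diagonal-covariance GMM vocabulary fit on training descriptors; the vocabulary size and all names are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.mixture import GaussianMixture


def fisher_vector(descriptors, gmm):
    """Improved Fisher vector of a set of local descriptors (N, D)
    under a diagonal-covariance GMM: gradients w.r.t. means and
    standard deviations, followed by power and L2 normalization."""
    X = np.atleast_2d(descriptors)            # (N, D) local color descriptors
    N, _ = X.shape
    q = gmm.predict_proba(X)                  # (N, K) soft assignments
    pi = gmm.weights_                         # (K,)  mixture weights
    mu = gmm.means_                           # (K, D)
    sigma = np.sqrt(gmm.covariances_)         # (K, D), covariance_type='diag'

    # normalized differences for every (descriptor, component) pair
    diff = (X[:, None, :] - mu[None, :, :]) / sigma[None, :, :]   # (N, K, D)

    # average gradients w.r.t. means and standard deviations
    g_mu = np.einsum('nk,nkd->kd', q, diff) / (N * np.sqrt(pi)[:, None])
    g_sigma = np.einsum('nk,nkd->kd', q, diff ** 2 - 1.0) / (N * np.sqrt(2 * pi)[:, None])

    fv = np.hstack([g_mu.ravel(), g_sigma.ravel()])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))    # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)  # L2 normalization


# Illustrative usage: fit the vocabulary on descriptors sampled from training
# images, then encode each aligned region separately and concatenate.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train_descriptors = rng.random((5000, 15))            # hypothetical color descriptors
    gmm = GaussianMixture(n_components=16, covariance_type='diag',
                          random_state=0).fit(train_descriptors)

    region_descriptors = [rng.random((200, 15)) for _ in range(4)]  # one set per aligned region
    image_vector = np.hstack([fisher_vector(d, gmm) for d in region_descriptors])
    print(image_vector.shape)                              # (4 * 2 * 16 * 15,)
```

In such a setup the region-level vectors, concatenated in a fixed alignment order, would then feed a linear classifier; the alignment, descriptor extraction, and segmentation stages described in the abstract are outside this sketch.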
