IEEE International Conference on Mobile Services

Orientational Spatial Part Modeling for Fine-Grained Visual Categorization



Abstract

Although significant success has been achieved in fine-grained visual categorization, most existing methods require bounding boxes or part annotations for both training and testing, which limits their usability and flexibility. To overcome these limitations, we aim to automatically detect the bounding box and parts for fine-grained object classification. The bounding boxes are acquired by a transfer strategy that infers the locations of objects from a set of annotated training images. Based on the generated bounding box, we propose a multiple-layer Orientational Spatial Part (OSP) model to generate a refined description of the object. Finally, we employ the output of a deep Convolutional Neural Network (dCNN) as the feature representation and train a linear SVM as the object classifier. Extensive experiments on public benchmark datasets demonstrate the impressive performance of our method: classification accuracy reaches 63.9% on CUB-200-2011 and 75.6% on Aircraft, which is higher than that of many existing methods that use manual annotations.
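
The final classification stage described above (deep CNN activations used as features, with a linear SVM on top) can be sketched as follows. This is a minimal illustration under assumptions not taken from the paper: a torchvision ResNet-50 stands in for the dCNN feature extractor and scikit-learn's LinearSVC for the linear SVM; the bounding-box transfer and OSP part-modeling stages are omitted, and the file paths and labels are hypothetical placeholders.

```python
# Sketch of the feature-extraction + linear-SVM stage from the abstract.
# Assumptions (not from the paper): ResNet-50 as the dCNN, LinearSVC as the SVM.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.svm import LinearSVC

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pretrained backbone with the classification head removed -> 2048-d features.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval().to(device)

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(image_paths):
    """Return an (N, 2048) array of CNN features for a list of image files."""
    feats = []
    for path in image_paths:
        img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
        feats.append(backbone(img).squeeze(0).cpu().numpy())
    return np.stack(feats)

# Hypothetical file lists and labels; replace with real dataset splits
# (e.g. CUB-200-2011 train/test images cropped to the detected bounding boxes).
train_paths, train_labels = ["bird_001.jpg", "bird_002.jpg"], [0, 1]
test_paths, test_labels = ["bird_003.jpg", "bird_004.jpg"], [0, 1]

clf = LinearSVC(C=1.0)  # linear SVM classifier on top of the dCNN features
clf.fit(extract_features(train_paths), train_labels)
accuracy = clf.score(extract_features(test_paths), test_labels)
print(f"classification accuracy: {accuracy:.3f}")
```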
