Published in: 2015 IEEE International Conference on Mobile Services

Orientational Spatial Part Modeling for Fine-Grained Visual Categorization



Abstract

Although significant success has been achieved in fine-grained visual categorization, most existing methods require bounding boxes or part annotations for both training and testing, which limits their usability and flexibility. To overcome these limitations, we aim to automatically detect the bounding box and parts for fine-grained object classification. The bounding boxes are acquired by a transfer strategy that infers object locations from a set of annotated training images. Based on the generated bounding box, we propose a multiple-layer Orientational Spatial Part (OSP) model to generate a refined description of the object. Finally, we employ the output of a deep Convolutional Neural Network (dCNN) as the feature representation and train a linear SVM as the object classifier. Extensive experiments on public benchmark datasets demonstrate the strong performance of our method: classification accuracy reaches 63.9% on CUB-200-2011 and 75.6% on Aircraft, which is higher than that of many existing methods that use manual annotations.
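The final stage of the pipeline described above (dCNN activations as features, a linear SVM as the classifier) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the synthetic 64-dimensional vectors stand in for dCNN part features, and the SVM is a simple hinge-loss subgradient solver rather than whatever solver the authors used.

```python
import numpy as np

# Hedged sketch: the paper feeds dCNN features from detected OSP parts to a
# linear SVM. Synthetic separable "features" stand in for dCNN outputs here.
rng = np.random.default_rng(0)
n, d = 200, 64
X_pos = rng.normal(loc=1.0, scale=0.5, size=(n, d))   # class +1 "features"
X_neg = rng.normal(loc=-1.0, scale=0.5, size=(n, d))  # class -1 "features"
X = np.vstack([X_pos, X_neg])
y = np.concatenate([np.ones(n), -np.ones(n)])

# Linear SVM: minimize lam/2 * ||w||^2 + mean(max(0, 1 - y * (X @ w + b)))
w, b = np.zeros(d), 0.0
lam, lr = 1e-3, 0.05
for _ in range(200):
    margins = y * (X @ w + b)
    viol = margins < 1                        # margin violators drive the update
    grad_w = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / len(X)
    grad_b = -y[viol].sum() / len(X)
    w -= lr * grad_w
    b -= lr * grad_b

train_acc = np.mean(np.sign(X @ w + b) == y)
print(f"training accuracy: {train_acc:.2f}")
```

In the actual method, each image's feature vector would concatenate dCNN activations pooled over the detected bounding box and OSP parts; the linear classifier on top is unchanged.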
