IEEE Conference on Computer Vision and Pattern Recognition

Learning coarse-to-fine sparselets for efficient object detection and scene classification

Abstract

Part model-based methods have been successfully applied to object detection and scene classification and have achieved state-of-the-art results. More recently, the "sparselets" work [1-3] was introduced to serve as a universal set of shared bases learned from a large number of part detectors, resulting in a notable speedup. Inspired by this framework, in this paper we propose a novel scheme that trains more effective sparselets in a coarse-to-fine manner. Specifically, we first train coarse sparselets that exploit the redundancy among part detectors by using an unsupervised single-hidden-layer auto-encoder. Then, we simultaneously train fine sparselets and activation vectors using a supervised single-hidden-layer neural network, in which sparselet training and discriminative activation-vector learning are jointly embedded into a unified framework. To adequately explore the discriminative information hidden in the part detectors and to achieve sparsity, we propose to optimize a new discriminative objective function by imposing an L0-norm sparsity constraint on the activation vectors. Using the proposed framework, promising results for multi-class object detection and scene classification are achieved on the PASCAL VOC 2007, MIT Scene-67, and UC Merced Land Use datasets, compared with existing sparselet baseline methods.
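The two training stages can be illustrated with a small numerical sketch. The code below is not the authors' implementation: it replaces the paper's discriminative objective with a plain reconstruction objective, uses made-up dimensions, and enforces the L0-norm constraint by hard-thresholding each activation vector to its k largest-magnitude entries. All names (W, S, A, project_l0, etc.) are illustrative only.

```python
import numpy as np

# Hypothetical setup: each of n_parts part detectors is a d-dimensional filter,
# stacked row-wise into W (n_parts x d).
rng = np.random.default_rng(0)
n_parts, d, n_sparselets, k = 200, 256, 64, 8   # k = L0 budget per activation vector
W = rng.standard_normal((n_parts, d))

# --- Stage 1: coarse sparselets from a single-hidden-layer linear auto-encoder ---
# Encoder E (d x m) and decoder D (m x d); the decoder rows act as coarse sparselets.
E = rng.standard_normal((d, n_sparselets)) / np.sqrt(d)
D = rng.standard_normal((n_sparselets, d)) / np.sqrt(n_sparselets)
lr = 1e-2
for _ in range(1000):
    H = W @ E                          # hidden activations (n_parts x m)
    G = H @ D - W                      # reconstruction error
    D -= lr * (H.T @ G) / n_parts      # gradient step on decoder
    E -= lr * (W.T @ (G @ D.T)) / n_parts  # gradient step on encoder
S = D                                  # coarse sparselets (m x d)

# --- Stage 2: fine sparselets + L0-sparse activation vectors ---
# The paper learns these jointly with a supervised, discriminative objective;
# this sketch substitutes alternating least squares on a reconstruction loss,
# projecting each activation vector onto the L0 ball of size k.
def project_l0(A, k):
    out = np.zeros_like(A)
    idx = np.argsort(-np.abs(A), axis=1)[:, :k]
    rows = np.arange(A.shape[0])[:, None]
    out[rows, idx] = A[rows, idx]
    return out

A = project_l0(W @ np.linalg.pinv(S), k)          # initial activation vectors
for _ in range(20):
    S = np.linalg.pinv(A) @ W                     # update fine sparselets
    A = project_l0(W @ np.linalg.pinv(S), k)      # re-solve + L0 projection

# Each original part response w_i^T x is approximated by a_i^T (S x), so the
# m sparselet responses S x are shared across all n_parts detectors.
print("relative reconstruction error:", np.linalg.norm(A @ S - W) / np.linalg.norm(W))
```

Because every activation vector keeps only k nonzero entries, evaluating all part detectors costs m sparselet convolutions plus sparse linear combinations, which is the source of the speedup the abstract refers to.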
