Computer Vision (ICCV), 2011 IEEE International Conference on

Efficient learning of sparse, distributed, convolutional feature representations for object recognition

Abstract

Informative image representations are important in achieving state-of-the-art performance in object recognition tasks. Among feature learning algorithms that are used to develop image representations, restricted Boltzmann machines (RBMs) have good expressive power and build effective representations. However, the difficulty of training RBMs has been a barrier to their wide use. To address this difficulty, we show the connections between mixture models and RBMs and present an efficient training method for RBMs that utilizes these connections. To the best of our knowledge, this is the first work showing that RBMs can be trained with almost no hyperparameter tuning to provide classification performance similar to or significantly better than mixture models (e.g., Gaussian mixture models). Along with this efficient training, we evaluate the importance of convolutional training, which can capture a larger spatial context with less redundancy than non-convolutional training. Overall, our method achieves state-of-the-art performance on both the Caltech 101 and Caltech 256 datasets using a single type of feature.
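The abstract refers to restricted Boltzmann machines without spelling out the model, so as background, the sketch below shows a generic binary RBM trained with one-step contrastive divergence (CD-1) in NumPy. This is a textbook formulation, not the efficient training method or the convolutional setup proposed in the paper; the layer sizes, learning rate, and toy data are illustrative assumptions.

```python
# Minimal sketch of a binary RBM trained with CD-1 (generic textbook formulation,
# not the paper's training method; all dimensions and hyperparameters are assumed).
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 64, 32          # e.g. 8x8 binary patches -> 32 hidden features
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_vis = np.zeros(n_visible)           # visible biases
b_hid = np.zeros(n_hidden)            # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, lr=0.01):
    """One CD-1 update on a mini-batch of binary visible vectors v0, shape (batch, n_visible)."""
    global W, b_vis, b_hid
    # Positive phase: hidden probabilities and samples given the data.
    h0_prob = sigmoid(v0 @ W + b_hid)
    h0_samp = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    # Negative phase: one Gibbs step (reconstruction), then hidden probabilities again.
    v1_prob = sigmoid(h0_samp @ W.T + b_vis)
    h1_prob = sigmoid(v1_prob @ W + b_hid)
    # Gradient approximation: data statistics minus reconstruction statistics.
    batch = v0.shape[0]
    W += lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / batch
    b_vis += lr * (v0 - v1_prob).mean(axis=0)
    b_hid += lr * (h0_prob - h1_prob).mean(axis=0)
    return float(np.mean((v0 - v1_prob) ** 2))  # reconstruction error as a rough progress signal

# Toy usage on random binary "patches"; real use would feed preprocessed image patches.
data = (rng.random((1000, n_visible)) < 0.3).astype(float)
for epoch in range(5):
    err = np.mean([cd1_step(data[i:i + 100]) for i in range(0, 1000, 100)])
    print(f"epoch {epoch}: mean reconstruction error {err:.4f}")
```

A convolutional variant of the same idea ties the weights across spatial locations and replaces the matrix products with convolutions, which is the kind of convolutional training the abstract contrasts with patch-wise (non-convolutional) training.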
