IEEE Transactions on Geoscience and Remote Sensing

Saliency-Guided Unsupervised Feature Learning for Scene Classification

Abstract

Due to the rapid technological development of various satellite sensors, huge volumes of high-resolution image data can now be acquired, and efficiently representing and recognizing the scenes in such data has become a critical task. In this paper, we propose an unsupervised feature learning framework for scene classification. Using a saliency detection algorithm, we extract a representative set of patches from the salient regions of the image data set. These unlabeled patches are exploited by an unsupervised feature learning method to learn a set of feature extractors that are robust and efficient and do not require elaborately designed descriptors such as the scale-invariant feature transform (SIFT). We show that the statistics generated by the learned feature extractors characterize complex scenes very well and yield excellent classification accuracy. To reduce overfitting in the feature learning step, we further employ a recently developed regularization method called "dropout," which has proved very effective in image classification. In the experiments, the proposed method was applied to two challenging high-resolution data sets: the UC Merced data set, containing 21 aerial scene categories at submeter resolution, and the Sydney data set, containing seven land-use categories at 60-cm spatial resolution. The proposed method matched or exceeded the previous best results on the UC Merced data set and obtained the highest accuracy on the Sydney data set, demonstrating that the proposed unsupervised-feature-learning-based method yields more accurate scene classification than latent-Dirichlet-allocation-based methods and the sparse coding method.
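The pipeline the abstract describes — saliency-guided patch sampling, unsupervised learning of feature extractors from the unlabeled patches, and dropout on the resulting activations — can be sketched as below. The abstract does not name the specific saliency detector or feature learner used in the paper, so spectral-residual saliency (Hou & Zhang, 2007) and spherical k-means are used here purely as illustrative stand-ins, on a toy random image.

```python
import numpy as np

rng = np.random.default_rng(0)

def box_blur(x, k=3):
    # Simple box filter used to smooth the log-amplitude spectrum.
    p = k // 2
    xp = np.pad(x, p, mode="edge")
    out = np.zeros_like(x)
    for i in range(k):
        for j in range(k):
            out += xp[i:i + x.shape[0], j:j + x.shape[1]]
    return out / (k * k)

def spectral_residual_saliency(img):
    # Spectral-residual saliency: the residual of the log-amplitude
    # spectrum, recombined with the original phase. A stand-in detector;
    # the abstract does not name the saliency algorithm actually used.
    f = np.fft.fft2(img)
    log_amp = np.log(np.abs(f) + 1e-8)
    residual = log_amp - box_blur(log_amp)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * np.angle(f)))) ** 2
    return sal / sal.max()

def sample_salient_patches(img, n_patches=200, size=8):
    # Draw patch centres with probability proportional to saliency,
    # so salient regions contribute most of the training patches.
    sal = spectral_residual_saliency(img)
    h, w = img.shape
    m = size // 2
    ys, xs = np.mgrid[m:h - m, m:w - m]
    weights = sal[m:h - m, m:w - m].ravel()
    weights /= weights.sum()
    idx = rng.choice(weights.size, size=n_patches, p=weights)
    patches = np.stack([
        img[y - m:y + m, x - m:x + m].ravel()
        for y, x in zip(ys.ravel()[idx], xs.ravel()[idx])
    ])
    # Per-patch brightness/contrast normalization, as is common
    # before unsupervised feature learning.
    patches -= patches.mean(axis=1, keepdims=True)
    patches /= patches.std(axis=1, keepdims=True) + 1e-8
    return patches

def learn_features(patches, k=16, iters=20):
    # Spherical k-means as a stand-in unsupervised feature learner:
    # the k centroids act as the learned feature extractors.
    centers = patches[rng.choice(len(patches), k, replace=False)]
    for _ in range(iters):
        centers /= np.linalg.norm(centers, axis=1, keepdims=True) + 1e-8
        assign = (patches @ centers.T).argmax(axis=1)
        for j in range(k):
            members = patches[assign == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def dropout(acts, p=0.5):
    # Inverted dropout on activations, the regularizer the abstract
    # cites for reducing overfitting in the feature learning step.
    mask = rng.random(acts.shape) >= p
    return acts * mask / (1.0 - p)

img = rng.random((64, 64))                       # toy grayscale "scene"
patches = sample_salient_patches(img)            # (200, 64)
extractors = learn_features(patches)             # (16, 64)
feats = np.maximum(patches @ extractors.T, 0.0)  # rectified encoding
feats = dropout(feats)                           # (200, 16)
```

In a full system, the per-patch encodings would be pooled into an image-level statistic and fed to a classifier; those steps, like the stand-in components above, are assumptions beyond what the abstract specifies.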

