
Supervised and unsupervised parallel subspace learning for large-scale image recognition


Abstract

Subspace learning is an effective and widely used technique for image feature extraction and classification. However, for large-scale image recognition in real-world applications, many subspace learning methods suffer from a heavy computational burden. To reduce the computational time and improve the recognition performance in this setting, we introduce the idea of parallel computing, which lowers the time complexity by splitting the original task into several subtasks, and we develop a parallel subspace learning framework. In this framework, we first divide the sample set into several subsets using two random data-division strategies, equal data division and unequal data division; these two strategies correspond to equal and unequal computational abilities of the nodes in a parallel computing environment. Next, we compute projection vectors from each subset in parallel, employing the graph embedding technique to provide a general formulation for parallel feature extraction. After combining the features extracted on all nodes, we present a unified criterion for selecting the most discriminative features for classification. Under the developed framework, we propose a supervised and an unsupervised parallel subspace learning approach, called parallel linear discriminant analysis (PLDA) and parallel locality preserving projection (PLPP), respectively. PLDA selects the features with the largest Fisher scores by estimating the weighted and unweighted sample scatter, while PLPP selects the features with the smallest Laplacian scores by constructing a whole affinity matrix. Theoretically, we analyze the time complexity of the proposed approaches and provide the fundamental support for applying the random division strategies. In the experiments, we establish two real parallel computing environments and employ four public image and video databases as the test data.
Experimental results demonstrate that the proposed approaches outperform several related supervised and unsupervised subspace learning methods while significantly reducing the computational time.
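The supervised (PLDA) pipeline described above — randomly divide the sample set, compute projection vectors on each subset independently, pool the candidates, then keep those with the largest Fisher scores — can be illustrated with a minimal single-process sketch. This is not the paper's implementation: the function names, the equal-division routine, the regularization constant, and the dimensions are illustrative assumptions, and the per-subset step would run on separate nodes in the actual parallel setting.

```python
import numpy as np

def equal_division(X, y, n_nodes, rng):
    # Equal data division: shuffle once, then split into same-size subsets
    # (unequal division would instead use node-dependent split points).
    idx = rng.permutation(len(X))
    return [(X[s], y[s]) for s in np.array_split(idx, n_nodes)]

def lda_projections(X, y, k):
    # Top-k LDA projection vectors computed from ONE subset only.
    d = X.shape[1]
    mu = X.mean(axis=0)
    Sw = np.zeros((d, d))   # within-class scatter
    Sb = np.zeros((d, d))   # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mu)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # Solve the generalized eigenproblem Sw^{-1} Sb w = lambda w
    # (small ridge term is an assumption, added for numerical stability).
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    order = np.argsort(-evals.real)[:k]
    return evecs.real[:, order]                      # shape (d, k)

def fisher_score(X, y, w):
    # Fisher score of the 1-D feature obtained by projecting onto w.
    z = X @ w
    mu = z.mean()
    between = sum(np.sum(y == c) * (z[y == c].mean() - mu) ** 2
                  for c in np.unique(y))
    within = sum(np.sum((z[y == c] - z[y == c].mean()) ** 2)
                 for c in np.unique(y))
    return between / (within + 1e-12)

def plda_sketch(X, y, n_nodes=4, k_per_node=3, k_final=5, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1: random equal data division across nodes.
    subsets = equal_division(X, y, n_nodes, rng)
    # Step 2: each node computes projections from its own subset
    # (sequential here; independent tasks in a real parallel run).
    W = np.hstack([lda_projections(Xs, ys, k_per_node) for Xs, ys in subsets])
    # Step 3: unified criterion -- rank all pooled candidate projections
    # by Fisher score and keep the k_final best.
    scores = np.array([fisher_score(X, y, W[:, j]) for j in range(W.shape[1])])
    return W[:, np.argsort(-scores)[:k_final]]
```

The unsupervised PLPP variant would follow the same three steps, but rank the pooled projections by (smallest) Laplacian score computed from an affinity matrix over the whole sample set instead of the Fisher score.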
