
Efficient learning of optimal linear representations for object classification.



Abstract

In many pattern classification problems, efficiently learning a suitable low-dimensional representation of high-dimensional data is essential. The advantages of linear dimension reduction methods are their simplicity and efficiency. Optimal component analysis (OCA) is a recently proposed linear dimension reduction method that seeks to optimize the discriminative ability of the nearest neighbor classifier for data classification and labeling. Mathematically, OCA defines an objective function that aims to discriminatively separate data from different classes, and an optimal basis is obtained through a stochastic gradient search on the underlying Grassmann manifold. OCA shows good performance in various applications, including face recognition, object recognition, and image retrieval. However, a limitation of OCA is its high computational complexity, which prevents wide usage in real applications. In this dissertation, several efficient methods, including two-stage OCA, multi-stage OCA, scalable OCA, and two-stage sphere factor analysis (SFA), are proposed to cope with this problem and achieve both efficiency and accuracy. Two-stage and multi-stage OCA speed up the OCA search by reducing the dimension of the search space; scalable OCA uses a more efficient gradient updating method to reduce the computational complexity of OCA; two-stage SFA first reduces the search space and then searches for the optimal basis on a simpler geometric manifold than that of OCA. Furthermore, a sparse OCA method is proposed by adding sparseness constraints to OCA. Additionally, an application of the efficient OCA methods to rapid classification trees is presented. Experimental results on face and object classification show that these methods achieve efficiency and discrimination simultaneously.
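The abstract states that OCA obtains its optimal basis through a gradient search on the Grassmann manifold. As an illustration of the manifold machinery involved, the following is a minimal sketch of a single gradient step on the Grassmann manifold: project a Euclidean gradient onto the tangent space at the current orthonormal basis, then retract back onto the manifold via a QR decomposition. The function name, the fixed step size, and the plain (non-stochastic) update are illustrative assumptions; OCA's actual nearest-neighbor objective and stochastic search procedure are not reproduced here.

```python
import numpy as np

def grassmann_step(U, euclid_grad, step=0.1):
    """One gradient-ascent step on the Grassmann manifold Gr(n, d).

    U           : n x d matrix with orthonormal columns (a subspace basis)
    euclid_grad : n x d Euclidean gradient of some objective at U
    Returns an n x d orthonormal basis after projection and retraction.
    """
    # Project the Euclidean gradient onto the tangent space at U by
    # removing the component that lies inside span(U).
    tangent = euclid_grad - U @ (U.T @ euclid_grad)
    # Move in the tangent direction, then retract onto the manifold
    # with a (reduced) QR decomposition, which re-orthonormalizes.
    Q, _ = np.linalg.qr(U + step * tangent)
    return Q

# Example: take one step from a random 3-dimensional subspace of R^10.
U0, _ = np.linalg.qr(np.random.randn(10, 3))
G = np.random.randn(10, 3)          # stand-in for an objective gradient
U1 = grassmann_step(U0, G)
```

Because the result of `grassmann_step` is again an orthonormal basis, the step can be iterated, which is the basic loop underlying gradient searches of this kind.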

Bibliographic information

  • Author

    Wu, Yiming

  • Affiliation

    The Florida State University

  • Degree-granting institution: The Florida State University
  • Subjects: Engineering, Robotics; Computer Science; Artificial Intelligence
  • Degree: Ph.D.
  • Year: 2010
  • Pages: 125 p.
  • Format: PDF
  • Language: English (eng)

