Pattern Recognition Letters
Feature extraction approaches based on matrix pattern: MatPCA and MatFLDA

Abstract

Principal component analysis (PCA) and Fisher linear discriminant analysis (FLDA), two popular feature extraction approaches in pattern recognition and data analysis, extract the needed features directly from vector patterns, i.e., before applying them, any non-vector pattern such as an image is first vectorized into a vector pattern by some technique such as concatenation. However, such vectorization has been shown not to be beneficial for image recognition, as evidenced by both the algebraic feature extraction approach and 2DPCA. In this paper, inspired by these two approaches, we take the opposite direction and extract features from any vector pattern by first matrixizing it into a matrix pattern and then applying the matrixized versions of PCA and FLDA, MatPCA and MatFLDA, to that pattern. MatFLDA uses, in essence, the same principle as the algebraic feature extraction approach and is constructed from an objective function similar to that of FLDA, while MatPCA, like PCA, minimizes the reconstruction error over the training samples to obtain a set of projection vectors, a somewhat different derivation from 2DPCA despite their equivalence. Finally, experiments on 10 publicly available datasets show that MatPCA and MatFLDA gain performance improvements to different degrees on 7 and 5 datasets, respectively, while the computational burden of extracting features is largely reduced. In addition, it is noteworthy that the proposed approaches remain linear, so the improvement in classification accuracy does not result from the commonly used non-linearization of the original linear approaches but from the simple matrixization. Furthermore, another prominent merit of matrixizing FLDA is that it naturally breaks the notorious rank limitation, i.e., that the number of discriminating vectors that can be found is bounded by C - 1 for a C-class problem, without introducing additional computational cost.
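To make the matrixization idea concrete, the following is a minimal sketch (not the authors' implementation) of a MatPCA-style extractor: each vector pattern is reshaped into a matrix pattern, and projection vectors are obtained from an eigen-decomposition of the column-direction scatter matrix of the matrixized training samples, i.e., the 2DPCA-equivalent solution the abstract refers to. The function names, the 8 x 8 matrixization layout, and the toy data are illustrative assumptions, not details from the paper.

```python
import numpy as np

def matrixize(x, rows, cols):
    """Reshape a d-dimensional vector pattern into a rows x cols matrix pattern.
    Assumes rows * cols == len(x); the particular layout is a design choice."""
    return x.reshape(rows, cols)

def matpca_fit(X, rows, cols, k):
    """Fit a MatPCA-style extractor: find k projection vectors that minimize the
    reconstruction error of the matrixized training samples (equivalently, the
    top eigenvectors of the column-direction scatter matrix, as in 2DPCA)."""
    A = np.stack([matrixize(x, rows, cols) for x in X])   # (n, rows, cols)
    mean = A.mean(axis=0)                                  # mean matrix pattern
    centered = A - mean
    # Column-direction scatter matrix, shape (cols, cols)
    G = np.einsum('nij,nik->jk', centered, centered) / len(A)
    eigvals, eigvecs = np.linalg.eigh(G)                   # ascending eigenvalues
    W = eigvecs[:, ::-1][:, :k]                            # top-k projection vectors
    return mean, W

def matpca_transform(x, mean, W, rows, cols):
    """Extract features: project the matrixized pattern onto the learned vectors,
    yielding a rows x k feature matrix instead of a long feature vector."""
    return (matrixize(x, rows, cols) - mean) @ W

# Toy usage: 100 random 64-dimensional vector patterns matrixized to 8 x 8
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))
mean, W = matpca_fit(X, rows=8, cols=8, k=3)
features = matpca_transform(X[0], mean, W, rows=8, cols=8)
print(features.shape)   # (8, 3)
```

Because the eigen-problem is solved on a cols x cols matrix rather than a d x d covariance matrix, the feature extraction cost drops substantially, which is consistent with the reduced computational burden reported in the abstract.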
