Neural Processing Letters

Kernel Local Sparse Representation Based Classifier

Abstract

Sparse representation-based classification (SRC) and its kernel extension methods have shown good classification performance. However, two drawbacks still exist in these classification methods: (1) These methods adopt an ℓ1-minimization problem to achieve an approximate solution of the sparse representation that is originally defined as an ℓ0-norm optimization problem, which may lead to an increase in the average classification error. (2) These methods employ a linear programming, second-order cone programming, or unconstrained quadratic programming algorithm to solve the ℓ1-minimization problem, whose computing time increases rapidly with the number of training samples. In this paper, I incorporate the idea of manifold learning into the kernel extension methods of SRC and propose a novel classification approach, named the kernel local sparse representation-based classifier (KLSRC). In the kernel feature space, KLSRC represents a target sample as a linear combination of merely a few nearby training samples, which is called a kernel local sparse representation (KLSR). The target sample is then assigned to the class that minimizes the residual between itself and the partial KLSR constructed from its training neighbors belonging to that class. Experimental results demonstrate the effectiveness of the proposed classifier.
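To make the classification rule above concrete, the following is a minimal sketch of a KLSRC-style prediction step, not the authors' exact formulation. It assumes an RBF kernel, selects the k nearest training samples by feature-space distance, and substitutes a ridge-regularized least-squares fit for the local sparse coding step; the function names and parameters (klsrc_predict, k, gamma, reg) are illustrative only.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF kernel matrix K[i, j] = exp(-gamma * ||X[i] - Y[j]||^2)."""
    sq = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * sq)

def klsrc_predict(X_train, y_train, x_test, k=10, gamma=1.0, reg=1e-3):
    """Classify one test sample with a kernel local representation (illustrative sketch)."""
    # Kernel evaluations between the test sample and all training samples.
    k_xt = rbf_kernel(X_train, x_test[None, :], gamma).ravel()   # shape (n,)
    K_tt = rbf_kernel(X_train, X_train, gamma)                   # shape (n, n)

    # Squared feature-space distance:
    # ||phi(x) - phi(x_i)||^2 = k(x, x) + k(x_i, x_i) - 2 k(x, x_i); k(x, x) = 1 for RBF.
    d2 = 1.0 + np.diag(K_tt) - 2.0 * k_xt
    nbr = np.argsort(d2)[:k]                                     # k nearest neighbors

    # Local coding over the neighbors: minimize
    # ||phi(x) - Phi_nbr w||^2 + reg * ||w||^2, which needs only kernel values.
    K_nn = K_tt[np.ix_(nbr, nbr)]
    k_n = k_xt[nbr]
    w = np.linalg.solve(K_nn + reg * np.eye(k), k_n)

    # Class-wise residual ||phi(x) - Phi w_c||^2 using only the coefficients
    # of neighbors from class c; pick the class with the smallest residual.
    best_cls, best_res = None, np.inf
    for c in np.unique(y_train[nbr]):
        w_c = np.where(y_train[nbr] == c, w, 0.0)
        res = 1.0 - 2.0 * w_c @ k_n + w_c @ K_nn @ w_c
        if res < best_res:
            best_cls, best_res = c, res
    return best_cls

# Example usage on random data (purely illustrative):
# X_tr, y_tr = np.random.randn(100, 20), np.random.randint(0, 3, 100)
# print(klsrc_predict(X_tr, y_tr, np.random.randn(20), k=10, gamma=0.5))
```

In the paper's formulation the coding step imposes sparsity rather than a ridge penalty, but the locality (restricting the representation to a few nearby training samples in the kernel feature space) and the class-wise residual comparison proceed in the same way.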
