Journal: Neurocomputing

A Novel multiple kernel-based dictionary learning for distributive and collective sparse representation based classifiers



Abstract

In recent years, sparse representation theory has attracted the attention of many researchers in the signal processing, pattern recognition and computer vision communities. The choice of dictionary matrix plays a key role in sparse representation based methods. It can be a pre-defined dictionary or can be learned via an optimization procedure. Furthermore, the dictionary learning process can be extended to a non-linear setting using an appropriate kernel function in order to handle non-linearly structured data. In this framework, the choice of kernel function is also a key step. Multiple kernel learning is an appealing strategy for dealing with this problem. In this paper, within the framework of kernel sparse representation based classification, we propose an iterative algorithm for joint learning of the dictionary matrix and the multiple kernel function. The weighted sum of a set of basis kernel functions is taken as the multiple kernel function, where the weights are optimized so that the reconstruction error of the sparse-coded data is minimized. In our proposed algorithm, the sparse coding, dictionary learning and multiple kernel learning processes are performed in three steps. The optimization is carried out under two different structures, namely distributive and collective, for the sparse representation based classifier. Our experimental results show that the proposed algorithm outperforms other existing sparse coding based approaches. These results also confirm that the collective setting leads to better results when the number of training examples is limited, whereas the distributive setting is more appropriate when there are enough training samples.
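The two ingredients the abstract describes can be sketched compactly: a multiple kernel built as a weighted sum of base kernels, and the feature-space reconstruction error of sparse-coded data evaluated purely through the Gram matrix (the kernel trick). The sketch below is illustrative only, not the paper's algorithm; the choice of base kernels (linear and RBF), the RBF bandwidth `gamma`, and the helper names are all assumptions. The dictionary is parameterized as D = Φ(X)A, as is standard in kernel dictionary learning, so the error ||Φ(X) − Φ(X)AS||²_F expands into traces of the kernel matrix K.

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    # RBF Gram matrix from pairwise squared Euclidean distances.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-gamma * d2)

def linear_kernel(X):
    return X @ X.T

def combined_kernel(X, weights):
    # Multiple kernel: weighted sum of base Gram matrices.
    # weights are assumed nonnegative; the paper optimizes them to
    # minimize the sparse-coding reconstruction error.
    bases = [linear_kernel(X), rbf_kernel(X)]
    return sum(w * K for w, K in zip(weights, bases))

def feature_space_reconstruction_error(K, A, S):
    # ||Phi(X) - Phi(X) A S||_F^2 expanded via the kernel trick:
    #   tr(K) - 2 tr(S^T A^T K) + tr(S^T A^T K A S)
    # where K = Phi(X)^T Phi(X), A holds dictionary coefficients
    # (atoms D = Phi(X) A), and S holds the sparse codes.
    return (np.trace(K)
            - 2.0 * np.trace(S.T @ A.T @ K)
            + np.trace(S.T @ A.T @ K @ A @ S))
```

With kernel weights `[1.0, 0.0]` the combined kernel reduces to the linear one, and the trace expression coincides with the ordinary Frobenius reconstruction error in input space, which gives a quick sanity check on the expansion.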


