International Conference on Pattern Recognition

Robust Projective Low-Rank and Sparse Representation by Robust Dictionary Learning

Abstract

In this paper, we discuss the robust-factorization-based robust dictionary learning problem for data representation. A Robust Projective Low-Rank and Sparse Representation model (R-PLSR) is proposed. Our R-PLSR model integrates L1-norm-based robust factorization and robust low-rank and sparse representation by robust dictionary learning into a unified framework. Specifically, R-PLSR performs joint low-rank and sparse representation over the informative low-dimensional representations obtained by robust sparse factorization, so that the results are more accurate. To make the factorization and representation procedures robust to noise and outliers, R-PLSR imposes the sparse L2,1-norm jointly on the reconstruction errors of the factorization and the dictionary learning. Note that the L2,1-norm can also minimize the reconstruction error as much as possible, since it theoretically tends to force many rows of the reconstruction error matrix to be zero. The nuclear norm and L1-norm are jointly imposed on the representation coefficients so that salient representations can be obtained. Extensive results on several image datasets show that our R-PLSR formulation delivers superior performance over state-of-the-art methods.
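The three regularizers named in the abstract are typically handled in alternating-minimization solvers through their proximal operators: soft-thresholding for the L1-norm (element-wise sparsity), singular value thresholding for the nuclear norm (low rank), and row-wise shrinkage for the L2,1-norm (zeroing whole rows of the error matrix, which is what makes it robust to sample-level outliers). The paper does not give its solver here, so the following is only an illustrative NumPy sketch of these three standard operators, not the authors' R-PLSR algorithm:

```python
import numpy as np

def prox_l1(X, tau):
    """Soft-thresholding: proximal operator of tau * ||X||_1.
    Shrinks each entry toward zero, producing element-wise sparsity."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def prox_nuclear(X, tau):
    """Singular value thresholding: proximal operator of tau * ||X||_*.
    Soft-thresholds the singular values, encouraging low rank."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def prox_l21(X, tau):
    """Row-wise shrinkage: proximal operator of tau * ||X||_{2,1}.
    Rows whose L2 norm falls below tau are set entirely to zero,
    which is why the L2,1-norm forces many rows of the
    reconstruction error matrix to vanish."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * X
```

In an ADMM- or ALM-style loop, each subproblem that isolates one of these norms plus a quadratic term reduces to a single call to the corresponding operator above.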
