
Learning Latent Low-Rank and Sparse Embedding for Robust Image Feature Extraction



Abstract

To mitigate the curse of dimensionality, inputs are typically projected from the original high-dimensional space into a target low-dimensional space for feature extraction. However, due to the presence of noise and outliers, feature extraction from corrupted data remains a challenging problem. Recently, a robust method called low-rank embedding (LRE) was proposed. Despite LRE's success in experimental studies, it has several drawbacks: 1) the learned projection cannot quantitatively interpret the importance of features; 2) LRE does not perform data reconstruction, so the features may fail to retain the main energy of the original "clean" data; 3) LRE explicitly transfers the error into the target space; and 4) LRE is an unsupervised method, suitable only for unsupervised scenarios. To address these problems, in this paper we propose a novel method to exploit latent discriminative features. In particular, we first utilize an orthogonal matrix to retain the main energy of the original data. Next, we introduce an l(2,1)-norm term to encourage the features to be more compact, discriminative, and interpretable. Then, we enforce a columnwise l(2,1)-norm constraint on an error component to resist noise. Finally, we integrate a classification loss term into the objective function to fit supervised scenarios. Our method outperforms several state-of-the-art methods in terms of effectiveness and robustness, as demonstrated on six publicly available datasets.
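The l(2,1)-norm regularizer mentioned in the abstract sums the l2 norms of a matrix's rows, which drives whole rows of the projection toward zero and thereby selects (and ranks the importance of) input features. A minimal NumPy illustration of this norm — not the authors' code; the function name and toy matrix are ours:

```python
import numpy as np

def l21_norm(W):
    """l(2,1)-norm of W: the sum of the l2 norms of its rows.
    Penalizing this quantity pushes entire rows of W to zero, so the
    corresponding input features are effectively discarded (row sparsity),
    while surviving rows' norms indicate feature importance."""
    return float(np.sum(np.linalg.norm(W, axis=1)))

# Toy projection matrix: the second row is all zeros, i.e. the second
# input feature contributes nothing to the learned embedding.
W = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [0.6, 0.8]])
print(l21_norm(W))  # row norms 5.0 + 0.0 + 1.0 -> 6.0
```

The columnwise variant used for the error term in the paper is the same idea applied to columns (`axis=0`), which zeroes out entire corrupted samples rather than features.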

Bibliographic details

  • Source
    IEEE Transactions on Image Processing | 2020 | pp. 2094-2107 | 14 pages
  • Author affiliations

    Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210094, Jiangsu, Peoples R China | Southwest Univ Sci & Technol, Sch Natl Def Sci & Technol, Mianyang 621010, Sichuan, Peoples R China;

    Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210094, Jiangsu, Peoples R China;

    Southwest Univ Sci & Technol, Sch Informat Engn, Mianyang 621010, Sichuan, Peoples R China;

    Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210094, Jiangsu, Peoples R China;

    Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210094, Jiangsu, Peoples R China;

  • Indexing information
  • Original format: PDF
  • Language: English
  • Chinese Library Classification
  • Keywords

    Subspace learning; feature extraction; low-rank embedding; l(2,1)-norm; face recognition;


