IEEE Transactions on Neural Networks and Learning Systems

Twin-Incoherent Self-Expressive Locality-Adaptive Latent Dictionary Pair Learning for Classification

Abstract

The projective dictionary pair learning (DPL) model jointly seeks a synthesis dictionary and an analysis dictionary by extracting block-diagonal coefficients with an incoherence-constrained analysis dictionary. However, DPL fails to discover the underlying subspaces and salient features at the same time, nor can it adaptively encode the neighborhood information of the embedded coding coefficients. In addition, although the data can be well reconstructed by minimizing the reconstruction error, useful discriminative salient-feature information may be lost and absorbed into the noise term. In this article, we propose a novel self-expressive adaptive locality-preserving framework: twin-incoherent self-expressive latent DPL (SLatDPL). To capture the salient features of the samples, SLatDPL minimizes a latent reconstruction error by integrating coefficient learning and salient feature extraction into a unified model, which can also discover the underlying subspaces and salient features simultaneously. To make the coefficients block diagonal and to ensure that the salient features are discriminative, SLatDPL regularizes them by imposing a twin-incoherence constraint. Moreover, SLatDPL utilizes a self-expressive adaptive weighting strategy that uses the normalized block-diagonal coefficients to preserve the locality of the codes and salient features. SLatDPL can use the class-specific reconstruction residual to handle new data directly. Extensive simulations on several public databases demonstrate the satisfactory performance of SLatDPL compared with related methods.
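As a concrete illustration of the classification step mentioned in the abstract, the sketch below shows how a DPL-style model can label a new sample by its class-specific reconstruction residual. The names (D_k for the class-wise synthesis sub-dictionaries, P_k for the analysis sub-dictionaries) and the plain residual rule ||x - D_k P_k x||_2 follow the standard projective DPL convention and are assumptions made for illustration; the article's SLatDPL additionally operates on latent salient features, which is not reproduced here.

    import numpy as np

    def classify_by_residual(x, synthesis_dicts, analysis_dicts):
        # Hypothetical helper, not the authors' code: assign x to the class
        # whose dictionary pair reconstructs it with the smallest residual.
        #   synthesis_dicts: list of (d, m_k) arrays, one D_k per class
        #   analysis_dicts:  list of (m_k, d) arrays, one P_k per class
        residuals = []
        for D_k, P_k in zip(synthesis_dicts, analysis_dicts):
            a_k = P_k @ x                          # analysis dictionary encodes the sample
            r_k = np.linalg.norm(x - D_k @ a_k)    # class-specific reconstruction residual
            residuals.append(r_k)
        return int(np.argmin(residuals))           # smallest residual wins

For a C-class problem this amounts to C matrix-vector products and C norms per query, which is why DPL-style models can handle new data directly without solving a sparse-coding problem at test time.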