Sparse Subspace Clustering via Two-Step Reweighted L1-Minimization: Algorithm and Provable Neighbor Recovery Rates
IEEE Transactions on Information Theory
Abstract

Sparse subspace clustering (SSC) relies on sparse regression for accurate neighbor identification. Inspired by recent progress in compressive sensing, this paper proposes a new sparse regression scheme for SSC via two-step reweighted $\ell_1$-minimization, which generalizes the two-step $\ell_1$-minimization algorithm introduced by E. J. Candès et al. in [The Annals of Statistics, vol. 42, no. 2, pp. 669–699, 2014] without incurring extra algorithmic complexity. To fully exploit the prior information offered by the sparse representation vector computed in the first step, our approach places a weight on each component of the regression vector and solves a weighted LASSO in the second step. We propose a data weighting rule suited to enhancing neighbor identification accuracy. Then, via the dual problem of the weighted LASSO, we study in depth the theoretical neighbor recovery rates of the proposed scheme. Specifically, we establish an interesting connection between the locations of the nonzeros of the optimal sparse solution to the weighted LASSO and the indexes of the active constraints of the dual problem. Afterwards, under the semi-random model, we derive analytic lower and upper bounds on the probabilities of various neighbor recovery events. Our analytic results confirm that, with the aid of data weighting and provided the prior neighbor information is accurate enough, the proposed scheme produces, with higher probability, more correct neighbors and fewer incorrect neighbors than the solution without data weighting. Computer simulations validate our analytic study and demonstrate the effectiveness of the proposed approach.
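The two-step pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: step 1 solves an ordinary LASSO for one point's self-representation, and step 2 re-solves a weighted LASSO. The paper's specific data weighting rule is not reproduced here; as a hypothetical stand-in, the classic reweighting rule $w_i = 1/(|c_i| + \varepsilon)$ from reweighted $\ell_1$-minimization is used. The solver is plain ISTA (proximal gradient), and all names (`weighted_lasso_ista`, `two_step_neighbors`) are illustrative.

```python
import numpy as np

def soft_threshold(z, tau):
    # Elementwise soft-thresholding: the prox operator of the (weighted) l1 norm.
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def weighted_lasso_ista(A, y, lam, w, n_iter=500):
    # ISTA for: min_c 0.5*||y - A c||^2 + lam * sum_i w_i * |c_i|.
    t = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from the spectral norm of A
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        c = soft_threshold(c - t * A.T @ (A @ c - y), t * lam * w)
    return c

def two_step_neighbors(X, j, lam=0.1, eps=1e-3):
    # Sparse self-representation of column j of X against all other columns.
    y = X[:, j]
    A = np.delete(X, j, axis=1)
    # Step 1: ordinary LASSO (unit weights).
    c1 = weighted_lasso_ista(A, y, lam, np.ones(A.shape[1]))
    # Step 2: reweight using the step-1 solution. This uses the classic
    # 1/(|c|+eps) rule as a stand-in for the paper's data weighting rule.
    w = 1.0 / (np.abs(c1) + eps)
    w /= w.min()          # normalize so the smallest weight equals 1
    return weighted_lasso_ista(A, y, lam, w)

# Toy example: 20 points drawn from two 2-D subspaces of R^5.
rng = np.random.default_rng(0)
U1, U2 = rng.standard_normal((5, 2)), rng.standard_normal((5, 2))
X = np.hstack([U1 @ rng.standard_normal((2, 10)),
               U2 @ rng.standard_normal((2, 10))])
X /= np.linalg.norm(X, axis=0)    # SSC convention: unit-norm columns
c = two_step_neighbors(X, j=0)
# The large-magnitude entries of c should typically fall on columns from
# the same subspace as point 0 (indices 0..8 after deleting column 0).
```

The second pass sharpens support recovery: components that were small after step 1 receive large weights and are pushed toward zero, while confident neighbors from step 1 are penalized lightly, which is exactly the mechanism the abstract credits for producing more correct and fewer incorrect neighbors.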
