Conference: Algorithmic Learning Theory

Sparse Learning for Large-Scale and High-Dimensional Data: A Randomized Convex-Concave Optimization Approach



Abstract

In this paper, we develop a randomized algorithm and theory for learning a sparse model from large-scale and high-dimensional data, which is usually formulated as an empirical risk minimization problem with a sparsity-inducing regularizer. Under the assumption that there exists an (approximately) sparse solution with high classification accuracy, we argue that the dual solution is also sparse or approximately sparse. The fact that both primal and dual solutions are sparse motivates us to develop a randomized approach for a general convex-concave optimization problem. Specifically, the proposed approach combines the strength of random projection with that of sparse learning: it utilizes random projection to reduce the dimensionality, and introduces ℓ_1-norm regularization to alleviate the approximation error caused by random projection. Theoretical analysis shows that, under favorable conditions, the randomized algorithm can accurately recover the optimal solutions to the convex-concave optimization problem (i.e., recover both the primal and dual solutions).
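The pipeline the abstract describes, projecting the data to a low dimension with a random matrix and then solving an ℓ_1-regularized problem there, can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the paper's algorithm: the ℓ_1-regularized subproblem is solved with plain ISTA, the function names (`lasso_ista`, `soft_threshold`) are ours, and mapping the low-dimensional solution back through the projection matrix is a crude stand-in for the paper's primal-dual recovery analysis.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (elementwise shrinkage).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(A, y, lam, n_iter=500):
    # ISTA for min_v 0.5 * ||A v - y||^2 + lam * ||v||_1.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    v = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ v - y)
        v = soft_threshold(v - grad / L, lam / L)
    return v

rng = np.random.default_rng(0)
n, d, m = 200, 2000, 100                   # n samples, d features, m << d projected dims
w_true = np.zeros(d)
w_true[:10] = 1.0                          # sparse ground-truth model
X = rng.standard_normal((n, d))
y = X @ w_true + 0.01 * rng.standard_normal(n)

R = rng.standard_normal((d, m)) / np.sqrt(m)   # Gaussian random projection
X_low = X @ R                                  # reduced-dimension data (n x m)
v = lasso_ista(X_low, y, lam=0.1)              # l1-regularized fit in low dimension
w_rec = R @ v                                  # map back to the original d-dim space
```

The ℓ_1 penalty in the projected space plays the role the abstract assigns to it: it damps the components of the fit that the random projection distorts, rather than letting them absorb the approximation error.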


