Symposium of the German Association for Pattern Recognition

The Group-Lasso: l_(1,∞) Regularization versus l_(1,2) Regularization



Abstract

The l_(1,∞) norm and the l_(1,2) norm are well-known tools for joint regularization in Group-Lasso methods. While the l_(1,2) version has been studied in detail, open questions remain regarding the uniqueness of solutions and the efficiency of algorithms for the l_(1,∞) variant. For the latter, we characterize the conditions for uniqueness of solutions, present a simple test for uniqueness, and derive a highly efficient active-set algorithm that can handle input dimensions in the millions. We compare both variants of the Group-Lasso in its two most common application scenarios: the first is obtaining sparsity at the level of groups in "standard" prediction problems; the second is multi-task learning, where the aim is to solve many learning problems in parallel that are coupled via the Group-Lasso constraint. We show that both versions perform quite similarly in "standard" applications. However, a very clear distinction between the variants emerges in multi-task settings, where the l_(1,2) version consistently outperforms its l_(1,∞) counterpart in terms of prediction accuracy.


