JMLR: Workshop and Conference Proceedings

Asynchronous Doubly Stochastic Group Regularized Learning



Abstract

Group regularized learning problems (such as group Lasso) are important in machine learning. Asynchronous parallel stochastic optimization algorithms have recently received considerable attention for handling large-scale problems. However, existing asynchronous stochastic algorithms for solving group regularized learning problems do not scale well in sample size and feature dimensionality simultaneously. To address this challenging problem, in this paper we propose a novel asynchronous doubly stochastic proximal gradient algorithm with variance reduction (AsyDSPG+). To the best of our knowledge, AsyDSPG+ is the first asynchronous doubly stochastic proximal gradient algorithm, and it scales well with large sample size and high feature dimensionality simultaneously. More importantly, we provide a comprehensive convergence guarantee for AsyDSPG+. Experimental results on various large-scale real-world datasets not only confirm the fast convergence of our new method, but also show that AsyDSPG+ scales better with sample size and dimensionality than existing algorithms.
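To make the "doubly stochastic" idea concrete, the sketch below shows a single-threaded proximal gradient update for group-Lasso-regularized least squares that is stochastic in both directions: it samples a mini-batch of examples and a single feature group, takes a gradient step on that group only, and applies the group soft-thresholding proximal operator. This is only an illustrative simplification under our own naming, not the paper's AsyDSPG+ algorithm itself: the asynchronous parallel execution and the variance-reduction correction described in the abstract are omitted.

```python
# Minimal sketch of one "doubly stochastic" proximal gradient step for
# group-Lasso-regularized least squares. Illustrative only: asynchrony and
# variance reduction (the AsyDSPG+ ingredients) are deliberately left out,
# and all function names here are our own, not from the paper.
import numpy as np

def group_soft_threshold(v, tau):
    """Proximal operator of tau * ||v||_2 (block soft-thresholding)."""
    norm = np.linalg.norm(v)
    if norm <= tau:
        return np.zeros_like(v)
    return (1.0 - tau / norm) * v

def doubly_stochastic_prox_step(x, A, b, groups, lam, step, batch_size, rng):
    """One update: sample a mini-batch of rows (stochastic in samples) and one
    feature group (stochastic in features), step on that group, then prox it."""
    n = A.shape[0]
    rows = rng.choice(n, size=batch_size, replace=False)   # random mini-batch of samples
    g = groups[rng.integers(len(groups))]                   # random coordinate group
    residual = A[rows] @ x - b[rows]                         # uses the current full iterate
    grad_g = A[rows][:, g].T @ residual / batch_size         # partial gradient on group g
    x = x.copy()
    x[g] = group_soft_threshold(x[g] - step * grad_g, step * lam)
    return x

# Tiny usage example on synthetic data with three disjoint feature groups.
rng = np.random.default_rng(0)
n, d = 200, 12
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
groups = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]
x = np.zeros(d)
for _ in range(1000):
    x = doubly_stochastic_prox_step(x, A, b, groups, lam=0.1, step=0.05,
                                    batch_size=16, rng=rng)
```

In an asynchronous parallel setting, many workers would run such updates concurrently on a shared iterate, each possibly reading a slightly stale copy of x; the paper's contribution is showing that this double sampling, combined with variance reduction, still converges under those conditions.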

