Neurocomputing

Convergence analysis of accelerated proximal extra-gradient method with applications

Abstract

Proximal algorithms are a popular class of methods for handling sparsity structure in datasets because of their low per-iteration cost and fast convergence. In this paper, we consider the framework of minimizing the sum of two convex functions, one of which is smooth with a Lipschitz-continuous gradient, while the other may be non-smooth. The use of such non-smooth functions, in the form of non-smooth regularizers, to identify complex sparsity structures in datasets has been an active research direction in recent years. We present a convergence analysis for an extragradient-based fixed-point method with an inertial component, on the basis of which a new accelerated proximal extragradient algorithm was recently designed. Extending the application areas of this algorithm, we apply it to solve (i) logistic regression problems with complex ℓ1-based penalties, namely the overlapping group lasso and fused lasso frameworks, and (ii) a recently proposed structurally regularized learning problem for representation selection, in which the objective function consists of a reconstruction error and structured regularizers combining a group-sparsity regularizer, a diversity regularizer, and a locality-sensitivity regularizer. Extensive experiments on several publicly available real-world datasets demonstrate the efficacy of the inertial extragradient method for solving these extended lasso and representation-selection problems of machine learning. (C) 2020 Elsevier B.V. All rights reserved.
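
The abstract does not state the update rule itself, so the following is only a minimal illustrative sketch, assuming the standard composite model min_x f(x) + g(x) with f smooth (Lipschitz gradient) and g proximable: a generic extragradient (prediction-correction) step combined with an inertial extrapolation, applied to an ordinary lasso problem. The function names, parameter values, and the lasso example are hypothetical and are not taken from the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def inertial_proximal_extragradient(grad_f, prox_g, x0, step, beta=0.3, max_iter=500):
    """Generic inertial proximal extragradient iteration for min_x f(x) + g(x).

    grad_f : gradient of the smooth part f (Lipschitz constant L)
    prox_g : prox_g(v, s), proximal map of s*g evaluated at v
    step   : step size, typically chosen relative to 1/L
    beta   : inertial (momentum) weight; convergence usually requires
             restrictions on beta and step that are omitted in this sketch
    """
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(max_iter):
        # Inertial extrapolation from the two most recent iterates.
        y = x + beta * (x - x_prev)
        # Extragradient prediction: a forward-backward step at y.
        z = prox_g(y - step * grad_f(y), step)
        # Correction: re-use the gradient evaluated at the prediction z.
        x_new = prox_g(y - step * grad_f(z), step)
        x_prev, x = x, x_new
    return x

# Illustrative use on a plain lasso problem: 0.5*||A x - b||^2 + lam*||x||_1.
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((100, 50)), rng.standard_normal(100), 0.1
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of x -> A.T @ (A @ x - b)
x_hat = inertial_proximal_extragradient(
    grad_f=lambda x: A.T @ (A @ x - b),
    prox_g=lambda v, s: soft_threshold(v, s * lam),
    x0=np.zeros(50),
    step=1.0 / L)
```

The structured penalties discussed in the paper (overlapping group lasso, fused lasso, and the representation-selection regularizers) would replace the plain ℓ1 proximal map above with their own, generally more involved, proximal or approximate proximal operators.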
