This work studies the application of the semismooth Newton (SSN) method to accelerating the convergence of distributed quadratic programming LASSO (DQP-LASSO), a consensus-based distributed sparse linear regression algorithm. DQP-LASSO uses the alternating direction method of multipliers (ADMM) to reduce a global LASSO problem to a series of local (per-agent) LASSO optimizations, whose outcomes are then appropriately combined. The SSN method enjoys superlinear convergence and thus allows these local optimizations to be carried out more efficiently. In some cases, however, SSN can experience convergence issues. Here it is shown that the regularization inherent in ADMM also provides sufficient regularization to stabilize the SSN iterations, ensuring stable convergence of the overall scheme. Additionally, the structure of the SSN algorithm permits an adaptive implementation of distributed sparse regression, which enables the estimation of time-varying sparse vectors and reduces storage requirements when processing streams of data.
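To make the stabilization argument concrete, the following is a minimal sketch (not the paper's implementation) of a semismooth Newton solver for the kind of local subproblem that arises in one ADMM agent update, min_x 0.5||Ax-b||^2 + lam||x||_1 + (rho/2)||x-v||^2, where the (rho/2)||x-v||^2 term is the ADMM-inherent proximal regularization. The residual F(x) = x - soft(x - t grad g(x), t*lam) is piecewise affine, and the rho*I contribution keeps the Hessian positive definite, so the generalized-Jacobian Newton system stays nonsingular. All names (`ssn_local_lasso`, `soft`, the parameter choices) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def soft(y, c):
    # Elementwise soft-thresholding: prox of c * ||.||_1.
    return np.sign(y) * np.maximum(np.abs(y) - c, 0.0)

def ssn_local_lasso(A, b, lam, rho, v, tol=1e-10, max_iter=50):
    """Illustrative semismooth Newton for one ADMM local subproblem:
        min_x 0.5||Ax - b||^2 + lam*||x||_1 + (rho/2)*||x - v||^2.
    Solves the fixed-point residual F(x) = x - soft(x - t*grad(x), t*lam) = 0."""
    n = A.shape[1]
    # Hessian of the smooth part; the rho*I term comes from ADMM and
    # makes H positive definite even when A is rank-deficient.
    H = A.T @ A + rho * np.eye(n)
    t = 1.0 / np.linalg.norm(H, 2)  # step size 1/L (spectral norm of H)
    x = np.zeros(n)
    for _ in range(max_iter):
        grad = A.T @ (A @ x - b) + rho * (x - v)
        y = x - t * grad
        F = x - soft(y, t * lam)
        if np.linalg.norm(F) < tol:
            break
        # Generalized Jacobian of soft() at y: diagonal 0/1 mask.
        d_mask = np.abs(y) > t * lam
        # J = I - D (I - t H), with D = diag(d_mask) applied row-wise.
        J = np.eye(n) - d_mask[:, None] * (np.eye(n) - t * H)
        d = np.linalg.solve(J, -F)
        # Simple damping on the residual norm for robustness far from the
        # solution; near it the full step alpha = 1 is accepted.
        alpha = 1.0
        while alpha > 1e-8:
            xn = x + alpha * d
            gn = A.T @ (A @ xn - b) + rho * (xn - v)
            Fn = xn - soft(xn - t * gn, t * lam)
            if np.linalg.norm(Fn) <= (1 - 1e-4 * alpha) * np.linalg.norm(F):
                break
            alpha *= 0.5
        x = x + alpha * d
    return x

# Small demo: agree with a long proximal-gradient (ISTA) run on the
# same regularized subproblem.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
lam, rho, v = 0.1, 1.0, np.zeros(10)

x_ssn = ssn_local_lasso(A, b, lam, rho, v)

H = A.T @ A + rho * np.eye(10)
t = 1.0 / np.linalg.norm(H, 2)
z = np.zeros(10)
for _ in range(20000):
    g = A.T @ (A @ z - b) + rho * (z - v)
    z = soft(z - t * g, t * lam)
```

The point of the sketch is the role of `rho * np.eye(n)`: without it, `H` (and hence the active-set block of `J`) can be singular for rank-deficient `A`, which is one way SSN runs into trouble; the ADMM proximal term removes that failure mode at no extra cost.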