IEEE Transactions on Signal Processing

Decentralized Sparse Multitask RLS Over Networks



Abstract

Distributed adaptive signal processing has attracted much attention over the past decade owing to its effectiveness in many decentralized real-time applications in networked systems. Because many natural signals are highly sparse, with most entries equal to zero, several decentralized sparse adaptive algorithms have been proposed recently. Most of them focus on single-task estimation problems, in which all nodes receive data associated with the same unknown vector and collaborate to estimate it. However, many applications are inherently multitask-oriented, and each node has its own unknown vector, distinct from the others'. The related multitask estimation problem benefits from collaboration among the nodes, as neighboring nodes usually share analogous properties and thus similar unknown vectors. In this paper, we study the distributed sparse multitask recursive least squares (RLS) problem over networks. We first propose a decentralized online alternating direction method of multipliers (ADMM) algorithm for the formulated RLS problem. The algorithm is simplified for easy implementation, with closed-form computations in each iteration and low storage requirements. Convergence analysis of the algorithm is presented. Moreover, to further reduce the complexity, we propose a decentralized online subgradient method with low computational overhead. We theoretically establish its mean square stability by providing upper bounds on the mean square deviation and the excess mean square error. A related distributed online proximal gradient method is presented, and an extension to clustered multitask networks is also provided. The effectiveness of the proposed algorithms is corroborated by numerical simulations, and an accuracy-complexity tradeoff between the proposed algorithms is highlighted.
