IEEE Transactions on Signal Processing

UVeQFed: Universal Vector Quantization for Federated Learning



Abstract

Traditional deep learning models are trained at a centralized server using data samples collected from users. Such data samples often include private information, which the users may not be willing to share. Federated learning (FL) is an emerging approach to train such learning models without requiring the users to share their data. FL consists of an iterative procedure in which, at each iteration, the users train a copy of the learning model locally. The server then collects the individual updates and aggregates them into a global model. A major challenge that arises in this method is the need for each user to repeatedly transmit its learned model over the throughput-limited uplink channel. In this work, we tackle this challenge using tools from quantization theory. In particular, we identify the unique characteristics associated with conveying trained models over rate-constrained channels, and propose a suitable quantization scheme for such settings, referred to as universal vector quantization for FL (UVeQFed). We show that combining universal vector quantization methods with FL yields a decentralized training system in which the compression of the trained models induces only minimal distortion. We then theoretically analyze the distortion, showing that it vanishes as the number of users grows. We also characterize how models trained with conventional federated averaging combined with UVeQFed converge to the model which minimizes the loss function. Our numerical results demonstrate the gains of UVeQFed over previously proposed methods in terms of both the distortion induced by quantization and the accuracy of the resulting aggregated model.
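The vanishing-distortion claim rests on a property of dithered (universal) quantization: when each user quantizes with an independent dither shared with the server, the quantization errors are zero-mean and mutually independent, so they average out in the server's aggregation step. The following is a minimal sketch of subtractive dithered quantization with server-side averaging; it is an illustration of the underlying principle, not the paper's actual lattice-based UVeQFed scheme, and all names and parameter values are illustrative.

```python
import numpy as np

def dithered_quantize(x, step, rng):
    """Subtractive dithered quantization: the encoder adds a dither known
    to both sides before rounding; the decoder subtracts it. The resulting
    error is uniform on [-step/2, step/2], independent of x."""
    dither = rng.uniform(-step / 2, step / 2, size=x.shape)
    q = step * np.round((x + dither) / step)  # nearest lattice point
    return q - dither                          # decoder output

rng = np.random.default_rng(0)
model = rng.standard_normal(1000)  # stand-in for a model update
step = 0.5                          # quantization step (illustrative)

# Each "user" quantizes with an independent dither; the server averages.
n_users = 64
decoded = np.mean(
    [dithered_quantize(model, step, np.random.default_rng(seed))
     for seed in range(n_users)], axis=0)

mse_single = np.mean((dithered_quantize(model, step, rng) - model) ** 2)
mse_avg = np.mean((decoded - model) ** 2)
# A single user's distortion is about step**2 / 12; the averaged
# distortion shrinks roughly as 1/n_users, mirroring the abstract's
# claim that distortion vanishes as the number of users grows.
```

In the FL setting each user quantizes its own (distinct) update rather than a common vector, but the same cancellation of independent quantization errors drives the vanishing aggregate distortion.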

