IEEE Network: The Magazine of Computer Communications

Federated Unlearning: Guarantee the Right of Clients to Forget

Abstract

The Right to be Forgotten gives a data owner the right to revoke their data from an entity storing it. In the context of federated learning (FL), the Right to be Forgotten requires that, in addition to the data itself, any influence of the data on the FL model must disappear, a process we call "federated unlearning." The most straightforward and legitimate way to implement federated unlearning is to remove the revoked data and retrain the FL model from scratch. However, the computational and time overhead of fully retraining FL models can be prohibitive. In this article, we take a first step toward comprehensively investigating how to realize the unlearning paradigm in the context of federated learning. First, we define the problem of efficient federated unlearning, including its challenges and goals, and identify three common types of federated unlearning requests: class unlearning, client unlearning, and sample unlearning. Based on these challenges and goals, we propose a general pipeline for federated unlearning that handles all three types of requests. We revisit how the training data affects the final FL model's performance, and accordingly equip the proposed framework with reverse stochastic gradient ascent (SGA) and elastic weight consolidation (EWC). Extensive experiments verify the effectiveness of the proposed method in terms of both unlearning efficacy and efficiency. We believe the proposed method will serve as an essential component of future machine unlearning systems.
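To make the SGA-plus-EWC idea concrete, the following minimal PyTorch sketch shows how the two pieces can fit together: gradient ascent on the loss over the data to be forgotten, while an EWC penalty anchors the parameters that matter most for the retained data. The function names (`estimate_fisher`, `unlearn`), the hyperparameters (`steps`, `lr`, `ewc_lambda`), and the diagonal-Fisher estimate are illustrative assumptions for exposition, not the paper's actual implementation.

```python
# A minimal sketch of unlearning via reverse SGA regularized by EWC.
# Assumes a classification model and standard (x, y) data loaders.
import torch
import torch.nn.functional as F


def estimate_fisher(model, retain_loader, device="cpu"):
    """Diagonal Fisher information on retained data (EWC importance weights)."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    n_batches = 0
    for x, y in retain_loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        F.cross_entropy(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
        n_batches += 1
    return {n: f / max(n_batches, 1) for n, f in fisher.items()}


def unlearn(model, forget_loader, retain_loader,
            steps=50, lr=1e-3, ewc_lambda=10.0, device="cpu"):
    """Ascend the loss on the forget set; EWC anchors parameters that
    matter for the retained data to limit catastrophic forgetting."""
    anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
    fisher = estimate_fisher(model, retain_loader, device)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    it = iter(forget_loader)
    for _ in range(steps):
        try:
            x, y = next(it)
        except StopIteration:
            it = iter(forget_loader)
            x, y = next(it)
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        # Negative cross-entropy: minimizing it performs gradient *ascent*
        # on the forget-set loss.
        forget_loss = -F.cross_entropy(model(x), y)
        ewc_penalty = sum(
            (fisher[n] * (p - anchor[n]) ** 2).sum()
            for n, p in model.named_parameters()
        )
        (forget_loss + ewc_lambda * ewc_penalty).backward()
        opt.step()
    return model
```

A class-unlearning request would put all samples of the revoked class in `forget_loader`; a client-unlearning request would put that client's local data there instead. The `ewc_lambda` trade-off in this sketch controls how aggressively the model forgets versus how well it preserves performance on retained data.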