IEEE Annual Consumer Communications and Networking Conference

Mitigating Data Poisoning Attacks On a Federated Learning-Edge Computing Network


Abstract

Edge Computing (EC) has seen a continuous rise in popularity as it provides a solution to the latency and communication issues associated with edge devices transferring data to remote servers. EC achieves this by bringing the cloud closer to edge devices. Even though EC does an excellent job of solving the latency and communication issues, it does not solve the privacy issues associated with users transferring personal data to the nearby edge server. Federated Learning (FL) is an approach that was introduced to solve the privacy issues associated with data transfers to distant servers. FL attempts to resolve this issue by bringing the code to the data, which goes against the traditional way of sending the data to remote servers. In FL, the data stays on the source device, and the Machine Learning (ML) model to be trained on the local data is brought to the end device instead. End devices train the ML model using local data and then send the model updates back to the server for aggregation. However, this process of asking random devices to train a model using their local data has potential risks, such as a participant poisoning the model by training on malicious data to produce bogus parameters. In this paper, an approach to mitigate data poisoning attacks in a federated learning setting is investigated. The application of the approach is highlighted, and its practical and secure nature is illustrated using numerical results.
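To make the federated workflow described in the abstract concrete, the following is a minimal sketch of a federated round in which each client trains locally and a server aggregates the resulting updates, with one client submitting a poisoned update. It assumes a synthetic linear-regression task and uses coordinate-wise median aggregation as a generic, well-known defence; the task, the aggregation rule, and all names (local_sgd_step, robust_aggregate, NUM_CLIENTS, etc.) are illustrative assumptions, not the mitigation approach proposed in the paper, which is not detailed in this abstract.

    # Illustrative sketch only: federated averaging on synthetic data with a
    # simple robust aggregation step. Not the paper's method.
    import numpy as np

    rng = np.random.default_rng(0)

    NUM_CLIENTS = 10      # hypothetical number of participating edge devices
    DIM = 5               # dimensionality of the linear model's weights
    LOCAL_STEPS = 20
    LR = 0.1

    def local_sgd_step(w, X, y, lr=LR, steps=LOCAL_STEPS):
        """Train a linear model locally with gradient descent; return new weights."""
        w = w.copy()
        for _ in range(steps):
            grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
            w -= lr * grad
        return w

    def robust_aggregate(updates):
        """Coordinate-wise median of client models: a generic defence that
        limits the influence of a single poisoned update."""
        return np.median(np.stack(updates), axis=0)

    # Synthetic client data drawn from a common ground-truth model.
    w_true = rng.normal(size=DIM)
    clients = []
    for _ in range(NUM_CLIENTS):
        X = rng.normal(size=(50, DIM))
        y = X @ w_true + 0.01 * rng.normal(size=50)
        clients.append((X, y))

    w_global = np.zeros(DIM)
    for _ in range(5):                       # federated rounds
        local_models = []
        for i, (X, y) in enumerate(clients):
            w_local = local_sgd_step(w_global, X, y)
            if i == 0:                        # client 0 poisons its update
                w_local = -10 * np.ones(DIM)
            local_models.append(w_local)
        w_global = robust_aggregate(local_models)

    print("error vs. ground truth:", np.linalg.norm(w_global - w_true))

The coordinate-wise median is used here only because it is a standard robust-aggregation baseline: with plain averaging, the single poisoned update in this sketch would pull the global model far from the ground truth, whereas the median keeps the resulting error small.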
