International Joint Conference on Neural Networks

Cooperative learning: Decentralized data neural network



Abstract

Researchers often wish to study data stored in separate locations, such as when several research entities wish to make inferences from their combined data. The most common solution is to centralize the data in one location. However, certain types of data can be difficult to transfer between entities for legal or practical reasons, making centralization problematic. A possible solution is to use methods that learn from data without moving them to a central location: decentralized algorithms. Only a few algorithms emphasizing that property are known to us, and even fewer are used in the biomedical domain. In this paper, we propose a decentralized neural network that allows data analysis without transferring the data from the sites that host them. Instead, this method transfers only the gradients (or parts of them) calculated via back-propagation. Our approach allows us to learn a classifier even when class examples are located at different sites, enabling privacy-aware collaboration across groups with specific research interests. We validate the method in several experiments to test stability, compare performance to a network trained on the centralized data, and investigate the ability to reduce the size of data transfers. Our experiments on simulated, benchmark, and neuroimaging addiction data provide strong evidence that the proposed model works as effectively as a pooled centralized model.
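The core idea described in the abstract — sites exchange back-propagated gradients rather than raw data, even when each site holds examples of only one class — can be sketched in a few lines. The code below is a minimal illustration under assumed conditions (two simulated sites, a logistic-regression "network", synchronous gradient averaging at a central node); it is not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "sites": each site holds examples of only one class,
# mirroring the setting where class examples live at different
# locations. Data and model here are illustrative assumptions.
site_a_x = rng.normal(loc=-2.0, size=(100, 2))   # site A: class 0 only
site_a_y = np.zeros(100)
site_b_x = rng.normal(loc=+2.0, size=(100, 2))   # site B: class 1 only
site_b_y = np.ones(100)

def local_gradient(w, b, x, y):
    """Each site computes the logistic-loss gradient on its own data.
    Only these gradients leave the site; the raw data never move."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))   # predicted probabilities
    err = p - y
    return x.T @ err / len(y), err.mean()

w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(200):
    gw_a, gb_a = local_gradient(w, b, site_a_x, site_a_y)
    gw_b, gb_b = local_gradient(w, b, site_b_x, site_b_y)
    # A central node averages the per-site gradients and updates
    # the shared parameters, which are broadcast back to the sites.
    w -= lr * (gw_a + gw_b) / 2
    b -= lr * (gb_a + gb_b) / 2

def accuracy(w, b, x, y):
    return np.mean(((x @ w + b) > 0) == (y == 1))

all_x = np.vstack([site_a_x, site_b_x])
all_y = np.concatenate([site_a_y, site_b_y])
print(f"accuracy on pooled data: {accuracy(w, b, all_x, all_y):.3f}")
```

Because the update uses the average of the per-site gradients, it matches what a centralized model would compute on the pooled data (for equal site sizes), which is the intuition behind the paper's claim of parity with a pooled model.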


