Federated Learning of Deep Neural Decision Forests

Abstract

Modern technical products have access to huge amounts of data, and by utilizing machine learning algorithms this data can be used to improve the usability and performance of those products. However, the data is likely to be large in quantity and privacy-sensitive, which rules out sending and storing all of it centrally. This in turn makes it difficult to train global machine learning models on the combined data of different devices. A decentralized approach known as federated learning solves this problem by letting devices, or clients, update a global model using their own data and send only the changes to the global model, so that they never need to communicate privacy-sensitive data. Deep neural decision forests (DNDF), inspired by the versatile random forests algorithm, combine the divide-and-conquer principle with the property of representation learning. In this paper we further develop the concept of DNDF to make it better suited to the federated learning framework. By parameterizing the probability distributions in the prediction nodes of the forest, and by including all trees of the forest in the loss function, a gradient of the whole forest can be computed, which several federated learning algorithms require. We demonstrate the inclusion of DNDF in federated learning in an empirical experiment with both homogeneous and heterogeneous data, using a convolutional neural network with the same architecture as the DNDF as the baseline. Experimental results show that the modified DNDF, consisting of three to five decision trees, outperforms the baseline convolutional neural network.
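To make the described modification concrete, below is a minimal sketch assuming a PyTorch implementation; the names SoftTree, DNDF, forest_loss and client_update are illustrative rather than taken from the paper, the split functions act directly on the input features instead of on CNN features for brevity, and FedAvg is used as a representative gradient-based federated algorithm. The prediction-node distributions are parameterized as softmaxed learnable logits, and the loss averages the negative log-likelihood over all trees, so one backward pass yields a gradient for the whole forest.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftTree(nn.Module):
    """One soft decision tree: sigmoid split functions route each sample to every
    leaf with some probability, and each leaf holds a learnable class distribution
    (softmax over logits) instead of a fixed histogram."""

    def __init__(self, in_features, depth, n_classes):
        super().__init__()
        self.n_leaves = 2 ** depth
        self.decisions = nn.Linear(in_features, self.n_leaves - 1)
        self.leaf_logits = nn.Parameter(torch.zeros(self.n_leaves, n_classes))

    def forward(self, x):
        d = torch.sigmoid(self.decisions(x))      # (B, n_inner) routing probabilities
        mu = x.new_ones(x.size(0), 1)             # probability of reaching the root is 1
        idx, nodes = 0, 1
        while nodes < self.n_leaves:              # walk the tree level by level
            level_d = d[:, idx:idx + nodes]
            mu = torch.stack((mu * level_d, mu * (1 - level_d)), dim=2)
            mu = mu.reshape(x.size(0), -1)        # children of node i end up at 2i, 2i+1
            idx, nodes = idx + nodes, nodes * 2
        leaf_probs = F.softmax(self.leaf_logits, dim=1)  # parameterized prediction nodes
        return mu @ leaf_probs                    # (B, n_classes)

class DNDF(nn.Module):
    """A forest of soft trees; the prediction is the mean of the tree outputs."""

    def __init__(self, in_features, n_trees=3, depth=4, n_classes=10):
        super().__init__()
        self.trees = nn.ModuleList(
            SoftTree(in_features, depth, n_classes) for _ in range(n_trees))

    def forward(self, x):
        return torch.stack([t(x) for t in self.trees]).mean(dim=0)

def forest_loss(model, x, y):
    """Include every tree in the loss so a single backward pass gives a gradient
    for the whole forest (all split weights and all leaf logits)."""
    losses = [F.nll_loss(torch.log(t(x) + 1e-8), y) for t in model.trees]
    return torch.stack(losses).mean()

def client_update(model, loader, lr=0.01, epochs=1):
    """FedAvg-style local step: a client trains the shared forest on its own data
    and returns only the updated parameters, never the data itself."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            forest_loss(model, x, y).backward()
            opt.step()
    return {k: v.detach().clone() for k, v in model.state_dict().items()}

In a federated round, a central server would average the parameter dictionaries returned by client_update across clients to form the next global model.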
