MODES: model-based optimization on distributed embedded systems

Abstract

The predictive performance of a machine learning model depends heavily on the corresponding hyper-parameter setting; hyper-parameter tuning is therefore often indispensable. Normally such tuning requires the machine learning model to be trained and evaluated on centralized data to obtain a performance estimate. In a distributed machine learning scenario, however, it is not always possible to collect all the data from all nodes due to privacy concerns or storage limitations. Moreover, if data has to be transferred over low-bandwidth connections, the time available for tuning shrinks. Model-Based Optimization (MBO) is a state-of-the-art method for tuning hyper-parameters, but its application to distributed machine learning models or federated learning remains under-researched. This work proposes MODES, a framework that allows MBO to be deployed on resource-constrained distributed embedded systems. Each node trains an individual model on its local data; the goal is to optimize the combined prediction accuracy. The presented framework offers two optimization modes: (1) MODES-B treats the whole ensemble as a single black box and optimizes the hyper-parameters of all individual models jointly, and (2) MODES-I treats all models as clones of the same black box, which allows the optimization to be efficiently parallelized in a distributed setting. We evaluate MODES by optimizing the hyper-parameters of a random forest and a multi-layer perceptron. The experimental results show that MODES outperforms the baseline, i.e., tuning each node individually with MBO on its local sub-data set, with improvements in mean accuracy (MODES-B), run-time efficiency (MODES-I), and statistical stability (both modes).
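The core sequential loop of model-based optimization, as referenced in the abstract, can be illustrated with a minimal sketch. This is not the authors' MODES implementation: `mbo_minimize`, its parameters, and the toy objective are hypothetical names for illustration, and a 1-nearest-neighbour surrogate with a distance-based exploration bonus stands in for the surrogate model (e.g., a Gaussian process or random forest) that MBO frameworks typically fit.

```python
import random

def mbo_minimize(objective, bounds, n_init=5, n_iter=20, seed=0):
    """Minimal sequential model-based optimization (MBO) sketch.

    A 1-nearest-neighbour prediction with a distance-based exploration
    bonus replaces the usual surrogate model; hypothetical illustration,
    not the MODES framework itself.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    # Initial design: evaluate a few random configurations.
    history = [(x, objective(x))
               for x in (rng.uniform(lo, hi) for _ in range(n_init))]
    for _ in range(n_iter):
        candidates = [rng.uniform(lo, hi) for _ in range(50)]

        def acquisition(x):
            # Surrogate prediction: objective value of the nearest
            # evaluated point, minus a bonus for being far from it
            # (lower is better: exploit low values, explore gaps).
            dist, y = min((abs(x - xe), ye) for xe, ye in history)
            return y - dist

        x_next = min(candidates, key=acquisition)
        history.append((x_next, objective(x_next)))
    return min(history, key=lambda p: p[1])

# Example: tune a single "hyper-parameter" of a toy objective.
best_x, best_y = mbo_minimize(lambda x: (x - 3.0) ** 2, bounds=(0.0, 10.0))
```

In MODES-B this loop would run over the joint hyper-parameter space of all node models at once, while MODES-I would run independent copies of it in parallel, one per node, against the shared black box.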

Bibliographic record

  • Source
    Machine Learning | 2021, Issue 6 | pp. 1527-1547 | 21 pages
  • Author affiliations

    TU Dortmund Univ Dept Comp Sci Dortmund Germany;

    Baidu Inc Big Data Lab Beijing Peoples R China|Univ Cent Florida Dept Elect & Comp Engn Orlando FL 32816 USA;

    TU Dortmund Univ Dept Stat Dortmund Germany;

    TU Dortmund Univ Dept Comp Sci Dortmund Germany;

    TU Dortmund Univ Dept Stat Dortmund Germany;

    Baidu Inc Big Data Lab Beijing Peoples R China;

    TU Dortmund Univ Dept Comp Sci Dortmund Germany;

  • Indexing information
  • Full-text format: PDF
  • Language: eng
  • CLC classification
  • Keywords
