International journal of machine learning and cybernetics

Flat random forest: a new ensemble learning method towards better training efficiency and adaptive model size to deep forest



Abstract

The known deficiencies of deep neural networks include inferior training efficiency, weak parallelization capability, too many hyper-parameters, and so on. To address these issues, some researchers presented deep forest, a special deep learning model, which achieves significant improvements but still suffers from poor training efficiency, an inflexible model size, and weak interpretability. This paper endeavors to solve these issues in a new way. Firstly, deep forest is extended to the densely connected deep forest to enhance prediction accuracy. Secondly, to perform parallel training with an adaptive model size, the flat random forest is proposed, which balances the width and depth of the densely connected deep forest. Finally, two core algorithms are presented: one for computing the forward output weights and one for updating the output weights. The experimental results show that, compared with deep forest, the proposed flat random forest achieves competitive prediction accuracy, higher training efficiency, fewer hyper-parameters, and an adaptive model size.
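The abstract does not give the two algorithms themselves, but the structure it describes, several forests trained in parallel (the model's width) whose outputs are merged through learned output weights, can be illustrated. The sketch below is a minimal, hypothetical reading of that idea: it stacks the class-probability outputs of independently trained scikit-learn random forests and solves for the output weights with ridge-regularized least squares, an ELM-style closed form assumed here for illustration. The forest count, the regularization constant, and the closed form itself are all assumptions, not the paper's published method.

```python
# Hypothetical sketch only: a "flat" ensemble of parallel random forests
# whose class-probability outputs are combined through output weights
# solved in closed form. Not the paper's published algorithms.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

n_forests = 4  # the model's "width"; an adaptive scheme would tune this
forests = [
    RandomForestClassifier(n_estimators=100, random_state=i).fit(X_tr, y_tr)
    for i in range(n_forests)
]

# Stack each forest's class-probability vector into one feature matrix H.
H = np.hstack([f.predict_proba(X_tr) for f in forests])

# One-hot targets T; output weights beta = (H^T H + lam*I)^{-1} H^T T,
# a ridge-regularized least-squares solve (assumed for illustration).
T = np.eye(len(np.unique(y)))[y_tr]
lam = 1e-3  # ridge constant, chosen arbitrarily here
beta = np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ T)

# Predict by applying the same stacking and output weights to test data.
H_te = np.hstack([f.predict_proba(X_te) for f in forests])
pred = (H_te @ beta).argmax(axis=1)
print("test accuracy:", (pred == y_te).mean())
```

Solving for the output weights in closed form rather than by gradient descent is what would let such a model trade depth for width and train its forests in parallel; again, this is one reading of the abstract, not its verified procedure.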
