
Fissionable Deep Neural Network



Abstract

Model combination nearly always improves the performance of machine learning methods, and averaging the predictions of multiple models further decreases the error rate. To obtain multiple high-quality models more quickly, this article proposes a novel deep network architecture called the Fissionable Deep Neural Network (FDNN). Instead of merely adjusting the weights of a fixed-topology network, FDNN contains multiple branches with shared parameters and multiple Softmax layers. During training, the model progressively splits until it becomes multiple separate models. FDNN not only reduces computational cost but also overcomes convergence interference between branches, giving branches that have fallen into a poor local optimum an opportunity to re-learn. The resulting improvement in supervised learning performance is demonstrated on the MNIST and CIFAR-10 datasets.
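To make the architecture concrete, here is a minimal numpy sketch of the idea the abstract describes: several branches share a common trunk's parameters, each branch ends in its own Softmax layer, and after "fission" each branch can act as a standalone model whose Softmax outputs are averaged for the ensemble prediction. All dimensions, weight initializations, and function names below are illustrative assumptions, not the paper's actual implementation or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable row-wise softmax.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical dimensions chosen for illustration.
n_in, n_hidden, n_classes = 4, 8, 3

# Trunk parameters shared by every branch before fission.
W_shared = rng.normal(size=(n_in, n_hidden))

# Two branches, each with its own head and Softmax layer.
W_branch = [rng.normal(size=(n_hidden, n_classes)) for _ in range(2)]

def branch_predict(x, w_head):
    h = np.tanh(x @ W_shared)   # shared computation
    return softmax(h @ w_head)  # branch-specific Softmax output

x = rng.normal(size=(5, n_in))
preds = [branch_predict(x, w) for w in W_branch]

# After fission each branch is a standalone model; averaging
# their Softmax outputs gives the ensemble prediction.
ensemble = np.mean(preds, axis=0)
print(ensemble.shape)
```

Averaging probability distributions keeps each row of `ensemble` a valid distribution (rows still sum to 1), which is the "averaging the predictions of multiple models" step the abstract refers to.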

