
Adaptive sparse dropout: Learning the certainty and uncertainty in deep neural networks


Abstract

Dropout is an important training method for deep neural networks, because it can help avoid over-fitting. Traditional dropout methods and many extended dropout methods omit some of the neurons' activation values according to probabilities. These methods calculate the activation probability of neurons using a designed formula, without providing a plausible explanation of the calculation method. This paper proposes an adaptive sparse dropout (AS-Dropout) method for neural network training. The algorithm maps the neurons' activation values in a layer to the relatively linear range of a sigmoid function, determines the ratio of active neurons by a probability calculation process, and drops most of the neurons according to the probabilities. The probability calculation depends on the activation values of the neurons, and the active neurons are then selected according to these probabilities. Therefore, AS-Dropout learns both the certainty and the uncertainty in deep neural networks. Additionally, since only a small number of neurons are active, AS-Dropout increases the sparsity of the network. We applied AS-Dropout in different neural network structures. When evaluated on the MNIST, COIL-100, and Caltech-101 datasets, the experimental results demonstrated that, overall, AS-Dropout substantially outperformed traditional dropout and several improved dropout methods. © 2021 Published by Elsevier B.V.
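The abstract sketches the algorithm in three steps: map a layer's activations into the roughly linear range of a sigmoid, turn those values into per-neuron activation probabilities under a sparsity constraint, and sample which neurons stay active. The following is a minimal NumPy sketch of one training-time forward pass under that reading; the standardization step, the probability rescaling, and the keep_ratio parameter are illustrative assumptions, not the paper's actual formulas.

    import numpy as np

    def as_dropout_forward(a, keep_ratio=0.1, rng=None):
        """Sketch of an AS-Dropout-style forward pass at training time.

        a          -- 1-D array of a layer's activation values
        keep_ratio -- assumed target fraction of active neurons; the paper
                      derives this ratio adaptively, which is not shown here
        """
        if rng is None:
            rng = np.random.default_rng()

        # Step 1 (assumption): standardize activations so they land in the
        # roughly linear range of the sigmoid around zero.
        z = (a - a.mean()) / (a.std() + 1e-8)
        p = 1.0 / (1.0 + np.exp(-z))  # per-neuron activation probability

        # Step 2 (assumption): rescale so that, on average, only keep_ratio
        # of the neurons remain active -- the sparsity constraint.
        p = np.clip(p * keep_ratio * a.size / (p.sum() + 1e-8), 0.0, 1.0)

        # Step 3: sample the binary mask. Certainty enters through the
        # activation-dependent probabilities, uncertainty through sampling.
        mask = rng.random(a.size) < p
        return a * mask, mask

    if __name__ == "__main__":
        acts = np.random.default_rng(0).normal(size=256)
        out, mask = as_dropout_forward(acts, keep_ratio=0.1)
        print(f"active neurons: {mask.sum()} / {mask.size}")

Because most entries of the sampled mask are zero, the layer output is sparse, matching the abstract's claim that only a small number of neurons stay active.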

Bibliographic Information

  • Source
    Neurocomputing | 2021, Issue 25 | pp. 354-361 | 8 pages
  • Authors

    Chen Yuanyuan; Yi Zhang;

  • Author Affiliations

    Sichuan Univ, Coll Comp Sci, Machine Intelligence Lab, Chengdu 610065, Peoples R China;

    Sichuan Univ, Coll Comp Sci, Machine Intelligence Lab, Chengdu 610065, Peoples R China;

  • Indexed in: Science Citation Index (SCI); Engineering Index (EI)
  • Format: PDF
  • Language: English
  • Keywords

    Deep neural network; Dropout; Network training; Sparsity;

