IEEE Transactions on Neural Networks and Learning Systems

Direct Error-Driven Learning for Deep Neural Networks With Applications to Big Data



Abstract

In this brief, heterogeneity and noise in big data are shown to increase the generalization error of the traditional learning regime used for deep neural networks (deep NNs). To reduce this error while overcoming the issue of vanishing gradients, a direct error-driven learning (EDL) scheme is proposed. First, to reduce the impact of heterogeneity and data noise, the concept of a neighborhood is introduced. Using this neighborhood, an approximation of the generalization error is obtained, and an overall error, comprising the learning and approximate generalization errors, is defined. A novel NN weight-tuning law is then derived from a layer-wise performance measure, enabling the direct use of the overall error for learning. Additional constraints are introduced into this layer-wise performance measure to guide and improve learning in the presence of noisy dimensions. The proposed direct EDL scheme effectively addresses heterogeneity and noise while mitigating vanishing gradients and noisy dimensions. A comprehensive simulation study shows that the proposed approach mitigates the vanishing-gradient problem while improving generalization by 6%.
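The abstract describes a layer-wise weight-tuning law that applies the overall error directly at every layer instead of backpropagating gradients through the stack. Below is a minimal sketch of that idea, assuming a direct-projection scheme in the spirit of the description: each hidden layer is updated from the output error through a fixed random matrix. The network sizes, tanh activations, projection matrices `B`, and learning rate are all illustrative assumptions; the paper's actual layer-wise performance measure, neighborhood-based generalization term, and additional constraints are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [8, 16, 16, 1]  # layer widths (illustrative)

# Weights for each layer; small random initialization.
W = [rng.standard_normal((m, n)) * 0.1 for n, m in zip(sizes[:-1], sizes[1:])]
# Fixed random matrices that project the overall output error to each
# hidden layer (an assumption standing in for the paper's layer-wise
# performance measure).
B = [rng.standard_normal((m, sizes[-1])) for m in sizes[1:-1]]

def forward(x):
    """Return activations of every layer; tanh hidden units, linear output."""
    acts = [x]
    for l, w in enumerate(W):
        z = w @ acts[-1]
        acts.append(np.tanh(z) if l < len(W) - 1 else z)
    return acts

def direct_edl_step(x, y, lr=1e-2):
    acts = forward(x)
    e = acts[-1] - y  # overall (output) error, used directly at every layer
    # Output layer: delta rule on the overall error.
    W[-1] -= lr * np.outer(e, acts[-2])
    # Hidden layers: the overall error reaches each layer through its own
    # fixed projection B[l], so no gradient is chained through the stack
    # and the update cannot vanish with depth.
    for l in range(len(W) - 1):
        delta = (B[l] @ e) * (1.0 - acts[l + 1] ** 2)  # tanh'(z) = 1 - tanh(z)^2
        W[l] -= lr * np.outer(delta, acts[l])
    return float(0.5 * e @ e)

x = rng.standard_normal(sizes[0])
y = np.array([1.0])
for _ in range(200):
    loss = direct_edl_step(x, y)
print(f"final squared error: {loss:.6f}")
```

Because each layer's update depends on the output error only through its own projection, no error signal is multiplied through successive layers, which is how such direct schemes sidestep vanishing gradients.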
