
Deep learning regularization in imbalanced data



Abstract

Deep neural networks are known to have a large number of parameters, which can lead to overfitting. As a result, various regularization methods designed to mitigate model overfitting have become an indispensable part of many neural network architectures. However, it remains unclear which regularization methods are the most effective. In this paper, we examine the impact of regularization on neural network performance in the context of imbalanced data. We consider three main regularization approaches: $L_{1}$, $L_{2}$, and dropout regularization. Numerical experiments reveal that the $L_{1}$ regularization method can be an effective tool to prevent overfitting in neural network models for imbalanced data.

Index Terms: regularization, neural networks, imbalanced data.
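The abstract's finding concerns $L_{1}$ regularization, which adds a penalty proportional to the sum of the absolute values of the model's weights to the training loss, driving uninformative weights toward zero. A minimal sketch of the idea, using a single-layer logistic-regression model in NumPy rather than the paper's (unspecified) network, with illustrative data, learning rate, and penalty strength:

```python
import numpy as np

# Minimal illustrative sketch, NOT the paper's implementation: one
# subgradient-descent step on an L1-regularized logistic loss. The
# dataset, learning rate `lr`, and penalty weight `lam` are assumptions
# chosen for demonstration only.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def l1_logreg_step(w, X, y, lam=0.01, lr=0.1):
    """One gradient step; lam * sign(w) is the L1 subgradient term."""
    n = len(y)
    grad = X.T @ (sigmoid(X @ w) - y) / n   # data-fit gradient
    grad += lam * np.sign(w)                # L1 penalty: shrinks weights toward 0
    return w - lr * grad

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = (X[:, 0] > 0).astype(float)             # only feature 0 is informative
w = np.zeros(5)
for _ in range(200):
    w = l1_logreg_step(w, X, y)
# After training, the L1 term keeps the four irrelevant weights small
# while the informative weight w[0] grows to fit the labels.
```

The same principle carries over to deep networks: the penalty is summed over all layer weights and added to the loss before backpropagation, which is how the sparsity-inducing effect described in the abstract is obtained.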

