Source: JMLR: Workshop and Conference Proceedings

Learning Compact Neural Networks with Regularization

Abstract

Proper regularization is critical for speeding up training, improving generalization performance, and learning compact models that are cost-efficient. We propose and analyze regularized gradient descent algorithms for learning shallow neural networks. Our framework is general and covers weight-sharing (convolutional networks), sparsity (network pruning), and low-rank constraints, among others. We first introduce the covering dimension to quantify the complexity of the constraint set and provide insights into the generalization properties. Then, we show that the proposed algorithms become well-behaved and local linear convergence occurs once the amount of data exceeds the covering dimension. Overall, our results demonstrate that near-optimal sample complexity is sufficient for efficient learning and illustrate how regularization can be beneficial for learning over-parameterized networks.
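
To make the framework concrete, below is a minimal illustrative sketch (not the paper's exact algorithm or analysis setting) of constrained gradient descent for a one-hidden-layer ReLU network, with a sparsity constraint enforced by hard-thresholding the hidden-layer weights after every gradient step; the same projection step could in principle be swapped for weight-sharing or low-rank truncation. All names (fit_sparse_shallow_net, hard_threshold, k_nonzero) are hypothetical.

import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def hard_threshold(W, k_nonzero):
    # Project W onto (approximately) k-sparse matrices by keeping only the
    # k largest-magnitude entries (ties may retain slightly more).
    flat = np.abs(W).ravel()
    if k_nonzero >= flat.size:
        return W
    cutoff = np.partition(flat, flat.size - k_nonzero)[flat.size - k_nonzero]
    return W * (np.abs(W) >= cutoff)

def fit_sparse_shallow_net(X, y, hidden=16, k_nonzero=64, lr=1e-2, steps=500, seed=0):
    # Projected gradient descent on the average squared loss of
    # f(x) = v^T relu(W x), keeping W k-sparse at every iterate.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = hard_threshold(rng.normal(scale=0.1, size=(hidden, d)), k_nonzero)
    v = rng.normal(scale=0.1, size=hidden)
    for _ in range(steps):
        H = relu(X @ W.T)                        # hidden activations, shape (n, hidden)
        resid = H @ v - y                        # residuals, shape (n,)
        grad_v = H.T @ resid / n                 # gradient w.r.t. output weights
        grad_H = np.outer(resid, v) * (H > 0)    # backpropagate through the ReLU
        grad_W = grad_H.T @ X / n                # gradient w.r.t. hidden weights
        v -= lr * grad_v
        W = hard_threshold(W - lr * grad_W, k_nonzero)   # gradient step + projection
    return W, v

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 20))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
    W, v = fit_sparse_shallow_net(X, y)
    print("nonzero entries in W:", int(np.count_nonzero(W)))

In this sketch the hard-thresholding projection stands in for the constraint set; the abstract's claim is that such constrained iterates are well-behaved and converge locally at a linear rate once the sample size exceeds the covering dimension of that set.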