IEEE Statistical Signal Processing Workshop

Simultaneous Sparsity and Parameter Tying for Deep Learning Using Ordered Weighted ℓ1 Regularization

Abstract

A deep neural network (DNN) usually contains millions of parameters, making both storage and computation extremely expensive. Although this high capacity allows DNNs to learn sophisticated mappings, it also makes them prone to over-fitting. To tackle this issue, we adopt a recently proposed sparsity-inducing regularizer called OWL (ordered weighted ℓ1), which has proven effective in sparse linear regression with strongly correlated covariates. Unlike conventional sparsity-inducing regularizers, OWL simultaneously eliminates unimportant variables by setting their weights to zero, while also explicitly identifying correlated groups of variables by tying the corresponding weights to a common value. We evaluate the OWL regularizer on several deep learning benchmarks, showing that it can dramatically compress the network with slight or even no loss in generalization accuracy.
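For concreteness, the OWL penalty pairs the sorted absolute values of the weights with a fixed non-increasing coefficient sequence lambda_1 >= lambda_2 >= ... >= lambda_p >= 0; the OSCAR-style choice lambda_i = alpha + beta * (p - i) is one common instantiation. The sketch below is a minimal PyTorch illustration, not the authors' implementation: the function name owl_penalty, the alpha/beta values, and the toy linear layer are assumptions made for the example.

```python
import torch

def owl_penalty(weights, alpha=1e-4, beta=1e-4):
    """Ordered weighted l1 (OWL) penalty with OSCAR-style coefficients.

    Sorts |w| in decreasing order and pairs it with a non-increasing
    coefficient sequence lambda_i = alpha + beta * (p - i), so the
    largest-magnitude weights receive the largest coefficients.
    """
    w = weights.flatten()
    p = w.numel()
    abs_sorted, _ = torch.sort(w.abs(), descending=True)
    # lambda_1 >= lambda_2 >= ... >= lambda_p = alpha >= 0
    lambdas = alpha + beta * torch.arange(p - 1, -1, -1,
                                          dtype=w.dtype, device=w.device)
    return torch.sum(lambdas * abs_sorted)


# Hypothetical usage in a training step: add the OWL term for a layer's
# weight matrix to the task loss before back-propagating.
if __name__ == "__main__":
    layer = torch.nn.Linear(100, 10)
    x = torch.randn(32, 100)
    target = torch.randint(0, 10, (32,))
    loss = torch.nn.functional.cross_entropy(layer(x), target)
    loss = loss + owl_penalty(layer.weight)
    loss.backward()
```

Note that this sketch simply adds the penalty value to the loss and relies on autograd's subgradient; OWL is often minimized instead with its proximal operator, which involves sorting followed by an isotonic-regression step, inside a proximal-gradient update.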
