International Conference on Pattern Recognition

Speeding-up pruning for Artificial Neural Networks: Introducing Accelerated Iterative Magnitude Pruning



Abstract

In recent years, pruning of Artificial Neural Networks (ANNs) has become the focus of much research, due to the extreme overparametrization of such models. This has urged the scientific community to investigate methods for simplifying the structure of weights in ANNs, mainly in an effort to reduce the time required for both training and inference. Frankle and Carbin [1], and later Renda, Frankle, and Carbin [2], introduced and refined an iterative pruning method that effectively prunes a large portion of the network's parameters with little to no loss in performance. On the downside, this method requires a large amount of time to apply, since, at each iteration, the network has to be trained for (almost) as many epochs as the unpruned network. In this work, we show that, in a limited setting, when targeting high overall sparsity rates, this per-iteration time can be reduced by more than 50% for every iteration except the last, while yielding a final product (i.e., the final pruned network) whose performance is comparable to the ANN obtained with the existing method.
