Chinese Conference on Pattern Recognition and Computer Vision

A Sparse Substitute for Deconvolution Layers in GANs



Abstract

Generative adversarial networks are useful tools for image generation, but training and running them is relatively slow due to the large number of parameters introduced by their generators. In this paper, S-Deconv, a sparse drop-in substitute for deconvolution layers, is proposed to alleviate this issue. S-Deconv decouples reshaping the input tensor from reweighting it: the input is first processed with a sparse fixed filter into the desired form and then reweighted with a learnable one. By doing so, S-Deconv exploits sparsity to reduce both the number of learnable parameters and the total number of parameters. Our experiments on Fashion-MNIST, CelebA and Anime-Faces verify the feasibility of our method. We also give another interpretation of our method from the perspective of regularization.
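The decoupling the abstract describes can be sketched in a few lines. The code below is a minimal, hypothetical illustration (not the authors' exact formulation): the fixed, parameter-free step inserts zeros between spatial positions, as a sparse fixed deconvolution filter would, and the learnable step is a 1x1 (pointwise) reweighting across channels. A standard k x k transposed convolution learns C_out x C_in x k x k weights, whereas the pointwise reweighting here learns only C_out x C_in.

```python
import numpy as np

def zero_insert_upsample(x, stride=2):
    """Fixed 'reshaping' step: insert zeros between spatial positions
    of a (C, H, W) feature map, producing (C, H*stride, W*stride).
    This step has no learnable parameters."""
    c, h, w = x.shape
    out = np.zeros((c, h * stride, w * stride), dtype=x.dtype)
    out[:, ::stride, ::stride] = x
    return out

def pointwise_reweight(x, weight):
    """Learnable 'reweighting' step: a 1x1 convolution that mixes
    channels. weight has shape (C_out, C_in)."""
    return np.einsum('oc,chw->ohw', weight, x)

def s_deconv(x, weight, stride=2):
    """Hypothetical S-Deconv-style layer: fixed sparse upsample,
    followed by learnable pointwise reweighting."""
    return pointwise_reweight(zero_insert_upsample(x, stride), weight)

# Usage: upsample an 8-channel 4x4 map to a 16-channel 8x8 map.
x = np.random.randn(8, 4, 4)
w = np.random.randn(16, 8)  # 128 learnable weights vs. 16*8*4*4 = 2048
                            # for a learnable 4x4 transposed convolution
y = s_deconv(x, w)
print(y.shape)  # (16, 8, 8)
```

Because the fixed filter is sparse (mostly zeros) and shared, only the reweighting weights need to be learned and stored, which is the source of the parameter savings the abstract claims.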


