
Deep quantization generative networks

Abstract

Equipped with powerful convolutional neural networks (CNNs), generative models have achieved tremendous success in various vision applications. However, deep generative networks suffer from high computational and memory costs in both model training and deployment. While many efforts have been devoted to accelerating discriminative models by quantization, effectively reducing the costs of deep generative models is more challenging and remains unexplored. In this work, we investigate applying quantization techniques to deep generative models. We find that preserving as much information as possible in the quantized activations is key to obtaining high-quality generative models. With this in mind, we propose Deep Quantization Generative Networks (DQGNs) to effectively accelerate and compress deep generative networks. By expanding the dimensions of the quantization basis space, DQGNs can achieve lower quantization error and are highly adaptive to complex data distributions. Experiments on two powerful frameworks (i.e., variational auto-encoders and generative adversarial networks) and two practical applications (i.e., style transfer and super-resolution) demonstrate our findings and the effectiveness of the proposed approach. (C) 2020 Elsevier Ltd. All rights reserved.
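As a rough illustration of the central idea, lowering quantization error by expanding the quantization basis space, the following NumPy sketch greedily approximates an activation tensor as a weighted sum of binary bases. The greedy residual scheme and the name multi_basis_quantize are illustrative assumptions, not the authors' formulation; the point is only that adding bases enlarges the representable space and shrinks the error.

import numpy as np

def multi_basis_quantize(x, num_bases):
    # Greedily fit x ~ sum_k alpha_k * b_k with binary bases b_k in {-1, +1}.
    # Each pass quantizes the remaining residual; more bases means a larger
    # quantization basis space and a smaller approximation error.
    residual = x.astype(np.float64)
    approx = np.zeros_like(residual)
    for _ in range(num_bases):
        b = np.sign(residual)
        b[b == 0] = 1.0                    # break ties toward +1
        alpha = np.abs(residual).mean()    # least-squares optimal scale for a sign basis
        approx = approx + alpha * b
        residual = residual - alpha * b
    return approx

rng = np.random.default_rng(0)
acts = rng.normal(size=(1024,))            # stand-in for one layer's activations
for k in (1, 2, 4):
    mse = np.mean((acts - multi_basis_quantize(acts, k)) ** 2)
    print(f"bases={k}  quantization mse={mse:.5f}")

Each greedy pass provably does not increase the residual norm, so the mean-squared error drops as the basis count grows from 1 to 4, mirroring the abstract's claim that a wider basis space is more adaptive to the activation distribution.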
