
Multi-Stage Variational Auto-Encoders for Coarse-to-Fine Image Generation


Abstract

The variational auto-encoder (VAE) is a powerful unsupervised learning framework for image generation. One drawback of the VAE is that it generates blurry images due to its Gaussianity assumption and the resulting L2 loss. To allow the generation of high-quality images by the VAE, we increase the capacity of the decoder network by employing residual blocks and skip connections, which also enable efficient optimization. To overcome the limitation of the L2 loss, we propose to generate images in a multi-stage manner, from coarse to fine. In the simplest case, the proposed multi-stage VAE divides the decoder into two components, in which the second component generates refined images based on the coarse images generated by the first component. Since the second component is independent of the VAE model, it can employ loss functions other than the L2 loss as well as different model architectures. The proposed framework can be easily generalized to contain more than two components. Experimental results on the MNIST and CelebA datasets demonstrate that the proposed multi-stage VAE can generate sharper images than the original VAE.
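The following is a minimal PyTorch sketch of the two-stage idea described in the abstract: a standard VAE decoder produces a coarse image, and a separate refinement network with residual blocks and skip connections sharpens it. The class names (CoarseDecoder, ResidualBlock, RefineNet), layer sizes, and the 28x28 MNIST-style resolution are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CoarseDecoder(nn.Module):
    """Stage 1: a standard VAE decoder (trained with the usual ELBO and its
    L2-style reconstruction term) mapping a latent z to a coarse 28x28 image."""
    def __init__(self, z_dim=32):
        super().__init__()
        self.fc = nn.Linear(z_dim, 64 * 7 * 7)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 7 -> 14
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),   # 14 -> 28
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 7, 7)
        return self.deconv(h)


class ResidualBlock(nn.Module):
    """Residual block with a skip connection, used to enlarge decoder capacity."""
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x):
        return x + self.conv2(F.relu(self.conv1(x)))  # identity skip connection


class RefineNet(nn.Module):
    """Stage 2: refines the coarse image. Because it is separate from the VAE,
    it can be trained with a loss other than L2 (e.g. adversarial or perceptual)."""
    def __init__(self, ch=32, n_blocks=3):
        super().__init__()
        self.inp = nn.Conv2d(1, ch, 3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(ch) for _ in range(n_blocks)])
        self.out = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, coarse):
        h = self.blocks(F.relu(self.inp(coarse)))
        # predict a correction on top of the coarse image (another skip connection)
        return torch.sigmoid(self.out(h) + coarse)


if __name__ == "__main__":
    z = torch.randn(8, 32)
    coarse = CoarseDecoder()(z)      # stage 1: coarse generation
    fine = RefineNet()(coarse)       # stage 2: coarse-to-fine refinement
    print(coarse.shape, fine.shape)  # both torch.Size([8, 1, 28, 28])
```

Generalizing to more than two components, as the abstract notes, would amount to chaining additional refinement stages, each taking the previous stage's output as input.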
