International Conference on Optoelectronic Imaging and Multimedia Technology

Semantic image inpainting with dense and dilated deep convolutional autoencoder adversarial network



Abstract

Developments in generative adversarial networks (GANs) make it possible to fill missing regions in damaged images with convincing details. However, many existing approaches fail to keep the inpainted content and structures consistent with their surroundings. In this paper, we propose a GAN-based inpainting model that restores semantically damaged images in a visually reasonable and coherent way. In our model, the generative network has an autoencoder frame and the discriminator network is a CNN classifier. Unlike the classic autoencoder, we design a novel bottleneck layer in the middle of the autoencoder, comprised of four dense-net blocks in which each block contains vanilla convolution layers and dilated convolution layers. The kernels of dilated convolutions are spread out, resulting in an effective enlargement of the receptive field. Thus the model can capture semantic information over a wider area to ensure the consistency of inpainted images. Furthermore, the multiplexing of different levels' features within each dense-net block helps the model understand the whole image better and produce a convincing result. We evaluate our model on the public datasets CelebA and Stanford Cars with random-position masks of different ratios. The effectiveness of our model is verified by qualitative and quantitative experiments.
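The receptive-field enlargement from dilation mentioned above can be sketched numerically: with stride 1, each convolution layer with kernel size k and dilation d grows the receptive field by (k - 1) * d. A minimal sketch (the layer counts and dilation rates below are illustrative assumptions, not the paper's exact configuration):

```python
# Sketch: receptive-field growth from stacking dilated convolutions.
# Layer counts and dilation rates are illustrative assumptions only.

def receptive_field(layers):
    """Receptive field of stacked stride-1 convolution layers.

    `layers` is a list of (kernel_size, dilation) pairs; each layer
    spreads its kernel out to an effective size k + (k - 1) * (d - 1),
    so the receptive field grows by (k - 1) * d per layer.
    """
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d
    return rf

# Four vanilla 3x3 convolutions (dilation 1):
vanilla = receptive_field([(3, 1)] * 4)                    # -> 9

# Four 3x3 convolutions with dilations 1, 2, 4, 8:
dilated = receptive_field([(3, d) for d in (1, 2, 4, 8)])  # -> 31

print(vanilla, dilated)
```

With the same number of layers and parameters, the dilated stack covers a 31-pixel span versus 9 for the vanilla stack, which is why dilation lets the bottleneck gather wider semantic context for consistent inpainting.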

