The development of generative adversarial networks (GANs) has made it possible to fill missing regions in damaged images with convincing details. However, many existing approaches fail to keep the inpainted content and structure consistent with the surroundings. In this paper, we propose a GAN-based inpainting model that restores semantically damaged images in a visually reasonable and coherent way. In our model, the generative network follows an autoencoder design and the discriminator network is a CNN classifier. Unlike a classic autoencoder, we design a novel bottleneck layer in the middle of the autoencoder, comprised of four dense-net blocks, each containing vanilla convolution layers and dilated convolution layers. The kernels of a dilated convolution are spread out, which effectively enlarges the receptive field; the model can therefore capture broader semantic information and ensure the consistency of inpainted images. Furthermore, the reuse of features from different levels within each dense-net block helps the model understand the whole image better and produce convincing results. We evaluate our model on the public CelebA and Stanford Cars datasets with randomly positioned masks of different area ratios. The effectiveness of our model is verified by qualitative and quantitative experiments.
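The bottleneck described above (dense-net blocks mixing vanilla and dilated convolutions) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the dilation rates, growth width, and layer count are assumptions chosen only to show how dense connectivity and dilation combine to widen the receptive field.

```python
import torch
import torch.nn as nn

class DilatedDenseBlock(nn.Module):
    """Hypothetical sketch of one bottleneck block: 3x3 convolutions with
    dense (concatenative) connectivity; dilation=1 gives a vanilla conv,
    larger dilations enlarge the receptive field without extra parameters."""

    def __init__(self, channels=64, growth=32, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for d in dilations:
            # padding=d with dilation=d keeps the spatial size unchanged
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, kernel_size=3, padding=d, dilation=d),
                nn.ReLU(inplace=True),
            ))
            in_ch += growth  # dense connectivity: each layer sees all earlier features
        # 1x1 conv projects the concatenated features back to the input width
        self.fuse = nn.Conv2d(in_ch, channels, kernel_size=1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return self.fuse(torch.cat(feats, dim=1))
```

In such a design, four of these blocks would sit between the encoder and decoder of the generator; the growing dilation rates let deep features aggregate context far beyond the masked region, which is what the abstract credits for the global consistency of the inpainted content.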