Neurocomputing
An edge guided coarse-to-fine generative network for image outpainting

Abstract

Deep-learning based generative models have achieved outstanding performance in various image processing tasks. This paper introduces a method to address the problem of extrapolating or outpainting visual context. When the input size is a small proportion of the output size, a limited amount of information is present to regenerate a semantically coherent image. This task is challenging because the missing region of the original image may include crucial semantic and spatial structural information, which is difficult to predict from the input. We propose a three-stage edge-guided coarse-to-fine generative network model, consisting of a contextual inference network, structural edge map generator and edge enhanced network, to synthesise semantically consistent output from small picture inputs. Our model adopts a gradual growth inference strategy in the contextual inference network so that the generated image can present a more coherent structure, and this result can support the structural edge map generator to generate a reasonable edge map in a large missing area. Combining the contextual inference network and structural edge map generator outputs enables the edge enhanced network to generate more convincing images. We evaluate our model using four public datasets: CelebA, Places2, Oxford Flower102 and CUB200. Our experimental results demonstrate that the proposed image outpainting network can successfully regenerate high-quality images with a large missing region even when some structural features are lost in the input images. (c) 2023 Elsevier B.V. All rights reserved.
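The data flow between the three stages described above can be sketched as a pipeline. This is only an illustrative sketch: the real stages are trained networks, whereas here each stage is a hypothetical placeholder (edge-replication padding for the gradual-growth coarse stage, a thresholded gradient magnitude for the edge map, and a simple edge-weighted adjustment for refinement). Function names, the square-patch assumption, and the `steps`/`weight` parameters are all inventions for illustration, not the paper's API.

```python
import numpy as np

def contextual_inference(patch, out_size, steps=2):
    """Stage 1 (placeholder): gradually grow the small square input patch
    toward the full output size, mimicking the gradual growth inference
    strategy. Each step pads the canvas with edge replication."""
    canvas = patch.astype(float)
    base = patch.shape[0]
    for i in range(1, steps + 1):
        target = base + (out_size - base) * i // steps
        pad = (target - canvas.shape[0]) // 2
        canvas = np.pad(canvas, pad, mode="edge")
    return canvas

def edge_map_generator(coarse):
    """Stage 2 (placeholder): derive a structural edge map from the coarse
    result via finite-difference gradient magnitude, thresholded at the mean."""
    gy, gx = np.gradient(coarse)
    mag = np.hypot(gx, gy)
    return (mag > mag.mean()).astype(float)

def edge_enhanced_network(coarse, edges, patch, weight=0.3):
    """Stage 3 (placeholder): refine the coarse image guided by the edge map,
    then paste the known input region back, since those pixels are given."""
    refined = coarse * (1.0 - weight * edges)
    h, w = patch.shape
    top = (refined.shape[0] - h) // 2
    left = (refined.shape[1] - w) // 2
    refined[top:top + h, left:left + w] = patch
    return refined

def outpaint(patch, out_size=128):
    """Chain the three stages: coarse inference -> edge map -> refinement."""
    coarse = contextual_inference(patch, out_size)
    edges = edge_map_generator(coarse)
    return edge_enhanced_network(coarse, edges, patch)

patch = np.arange(32 * 32, dtype=float).reshape(32, 32)
result = outpaint(patch)  # 128x128 canvas with the 32x32 input kept at centre
```

The key design point the sketch preserves is that the edge map is generated from the coarse stage's output, not from the raw input, so the generator has a structurally coherent canvas to extract edges from before refinement.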
