Pattern Recognition: The Journal of the Pattern Recognition Society

All-in-focus synthetic aperture imaging using generative adversarial network-based semantic inpainting



Abstract

Occlusion handling poses a significant challenge to many computer vision and pattern recognition applications. Recently, Synthetic Aperture Imaging (SAI), which uses more than two cameras, has been widely applied to reconstruct occluded objects in complex scenes. However, it usually fails under heavy occlusion, in particular when the occluded information is not captured by any of the camera views. Hence, generating a realistic all-in-focus synthetic aperture image that reveals a completely occluded object is a challenging task. In this paper, semantic inpainting using a Generative Adversarial Network (GAN) is proposed to address this problem. The proposed method first computes a synthetic aperture image of the occluded objects using a labeling method, along with an alpha matte of the partially occluded objects. It then uses energy minimization to reconstruct the background by focusing at the background depth of each camera. Finally, the occluded regions of the synthesized image are semantically inpainted using a GAN, and the results are composited with the reconstructed background to generate a realistic all-in-focus image. The experimental results demonstrate that the proposed method can handle heavy occlusions and produces better all-in-focus images than other state-of-the-art methods. Compared with traditional labeling methods, our method can quickly generate labels for occluded regions without introducing noise. To the best of our knowledge, our method is the first to address the missing information caused by heavy occlusions in SAI using a GAN. (C) 2020 Elsevier Ltd. All rights reserved.
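Two of the pipeline stages described above are standard operations that can be sketched compactly: shift-and-average synthetic aperture refocusing (which blurs out off-plane occluders), and the final alpha compositing of the inpainted foreground over the reconstructed background. The following is a minimal NumPy sketch under simplifying assumptions (grayscale views, integer-pixel disparities); the function names are illustrative and this is not the paper's actual implementation, which additionally uses labeling, energy minimization, and GAN-based inpainting.

```python
import numpy as np

def synthetic_aperture_focus(views, shifts):
    """Shift-and-average refocusing: translate each camera view so the
    chosen focal plane aligns across views, then average. In-focus
    content reinforces; off-plane occluders are averaged away."""
    h, w = views[0].shape
    acc = np.zeros((h, w), dtype=np.float64)
    for view, (dy, dx) in zip(views, shifts):
        # Integer-pixel alignment toward the focal plane (illustrative).
        acc += np.roll(view, shift=(dy, dx), axis=(0, 1))
    return acc / len(views)

def composite(inpainted, background, alpha):
    """Alpha-composite the inpainted foreground over the reconstructed
    background: out = alpha * fg + (1 - alpha) * bg."""
    return alpha * inpainted + (1.0 - alpha) * background
```

In practice the per-view shifts depend on the focal depth and each camera's baseline, and sub-pixel warping would replace the integer `np.roll`; the sketch only conveys the averaging and compositing structure.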
